CN117372667A - Pose adjusting method and device of image acquisition assembly and controller

Pose adjusting method and device of image acquisition assembly and controller

Info

Publication number
CN117372667A
Authority
CN
China
Prior art keywords
target
coordinate system
image
image acquisition
pose
Prior art date
Legal status
Pending
Application number
CN202210766701.0A
Other languages
Chinese (zh)
Inventor
Name withheld at the inventor's request
王家寅
Current Assignee
Shanghai Microport Medbot Group Co Ltd
Original Assignee
Shanghai Microport Medbot Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Microport Medbot Group Co Ltd
Priority to CN202210766701.0A
Publication of CN117372667A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2068Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis using pointers, e.g. pointers having reference marks for determining coordinates of body points

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Robotics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Manipulator (AREA)

Abstract

The specification provides a pose adjustment method, device and controller for an image acquisition assembly, and relates to the technical field of medical devices. The pose adjustment method includes: acquiring a target image formed from signals acquired by the image acquisition component in its current pose; determining a reference coordinate system on the target image; acquiring a field-of-view adjustment instruction, which is used to control adjustment of the field of view of the image acquisition component relative to the reference coordinate system; determining the adjustment direction of the image acquisition component corresponding to the field-of-view adjustment instruction; and adjusting the pose of the image acquisition component according to the adjustment direction. With this method and device, the pose of the image acquisition component can be adjusted while remaining in the control mode of the operation tool, which enriches the ways in which the pose of the image acquisition component can be adjusted, allows the pose to be adjusted quickly, and improves overall operating efficiency.

Description

Pose adjusting method and device of image acquisition assembly and controller
Technical Field
The present application relates to the technical field of medical devices, and in particular to a pose adjustment method and device for an image acquisition assembly, and a controller.
Background
Some automation devices in the prior art require an operator to manipulate an operation tool on the device under conditions in which the operator cannot see the real-time operation with the naked eye (for example, the operation takes place in an area too small to be resolved without a certain degree of magnification, or the operator is not located where the operation can be observed, such as in a different room). For this reason, such controllable automation devices are usually provided with an image acquisition component, such as a camera, to capture the real-time operating conditions and feed them back to the operator.
In order to view the real-time operation captured by the image acquisition component more accurately, the operator generally needs, while controlling the operation tool, to switch from the operation-tool mode to the control mode of the image acquisition component, and in that mode adjust the pose of the image acquisition component so as to change the viewing angle and thereby obtain more imaging detail, i.e. a better image field of view. This makes the process of operating the automation device to complete a task cumbersome and lengthens the execution time of the task.
Disclosure of Invention
The embodiments of the present application aim to provide a pose adjustment method and device for an image acquisition assembly, and a controller, so as to solve the problems of a cumbersome operation process and low operating efficiency.
In order to solve the above technical problem, a first aspect of the present disclosure provides a pose adjustment method of an image acquisition assembly, including: acquiring a target image formed according to signals acquired by an image acquisition component under the current pose; determining a reference coordinate system on the target image; acquiring a visual field adjustment instruction, wherein the visual field adjustment instruction is used for controlling the visual field of the image acquisition component to be adjusted relative to the reference coordinate system; determining an adjustment direction of an image acquisition component corresponding to the visual field adjustment instruction; and controlling and adjusting the pose of the image acquisition component according to the adjustment direction.
In some embodiments, determining a reference coordinate system on the target image includes: acquiring a target position point on the target image; and establishing a reference coordinate system according to the target position point.
In some embodiments, acquiring a target location point on the target image includes: acquiring the position of a laser point in the target image, and taking the position of the laser point as a target position point; wherein the position of the laser spot is obtained by: acquiring a first image formed according to a signal acquired by an image acquisition component under the pose of the last moment; and controlling the laser at the tail end of the target instrument to emit laser to the target position at the current moment by referring to the first image, and controlling the image acquisition assembly to acquire signals.
In some embodiments, acquiring a target location point on the target image includes: acquiring a target position positioning instruction; responding to the target position positioning instruction, acquiring the position of the tail end of the target instrument in the target image, and taking the position of the tail end of the target instrument as a target position point; wherein the position of the target instrument tip is obtained by: acquiring a first image formed according to a signal acquired by an image acquisition component under the pose of the last moment; and controlling the tail end of the target instrument to move to the target position at the current moment by referring to the first image, and controlling the image acquisition assembly to acquire signals.
In some embodiments, acquiring a target location point on the target image includes: when the target image is displayed on the display screen, acquiring the position determined by the operator on the displayed target image; the position determined on the displayed target image is determined as a target position point on the target image.
In some embodiments, establishing a reference coordinate system from the target location point includes: determining a first position coordinate of a target position point under a target object coordinate system; the target object coordinate system is a coordinate system established by taking a preset point on the target object as a coordinate origin; determining a second position coordinate of a predetermined point on the target object under the base coordinate system; determining the position of the target position point under a base coordinate system according to the first position coordinate and the second position coordinate; and establishing a reference coordinate system according to the position of the target position point under the base coordinate system.
In some embodiments, determining a first position coordinate of the target position point in the target object coordinate system includes: acquiring pixel coordinates of the target position point in the target image, where the pixel coordinates are coordinates determined with one pixel as one unit of length; and converting the pixel coordinates into first position coordinates of the target position point in the image acquisition component coordinate system according to a preset conversion relation, where the preset conversion relation is used to convert a pixel length in the target image into an actual length in the base coordinate system.
In some embodiments, before converting the pixel coordinates into the first position coordinates of the target position point in the image acquisition component coordinate system according to the preset conversion relation, the method further includes obtaining the preset conversion relation as follows: keeping the pose of the image acquisition assembly unchanged, controlling the instrument tip to move in sequence to a plurality of position points within the field of view of the image acquisition assembly, and recording the base coordinates of each position point in the base coordinate system; determining the pixel coordinates of each of the plurality of position points in the target image; and determining the conversion relation between pixels of the target image and actual lengths in the base coordinate system according to the pixel coordinates of each position point in the target image and its base coordinates in the base coordinate system.
In some embodiments, establishing a reference coordinate system from the target location point includes: establishing a reference coordinate system by taking a target position point as an origin of the reference coordinate system and taking the coordinate axis direction of the image acquisition component coordinate system as the coordinate axis direction of the reference coordinate system; or, taking the first target position point as an origin of a reference coordinate system, taking the direction of the second target position point relative to the first target position point as an x-axis direction of the reference coordinate system, and taking the z-axis direction of the image acquisition component coordinate system as a z-axis direction of the reference coordinate system, so as to establish the reference coordinate system; or, the first target position point is taken as an origin of the reference coordinate system, the direction of the second target position point relative to the first target position point is taken as an x-axis direction of the reference coordinate system, and the normal direction of the plane where the first target position point, the second target position point and the third target position point are located is taken as a z-axis direction of the reference coordinate system, so that the reference coordinate system is established.
In some embodiments, obtaining the view adjustment instruction includes: acquiring a visual field translation or rotation instruction sent by an operator through a manipulator on an operation console; or displaying the target image, and displaying at least one virtual area in the upper, lower, left and right directions of the target image; when the tail end of the target instrument in the target image moves into the virtual area, a translation instruction corresponding to the position of the virtual area is generated.
In some embodiments, the method further comprises: when the pose of the image acquisition component is adjusted according to the adjustment direction of the image acquisition component, the position of the tail end of the target instrument is automatically adjusted, so that the relative pose of the tail end of the target instrument and the image acquisition component is kept consistent.
In some embodiments, when automatically adjusting the position of the target instrument tip such that the target instrument tip remains consistent with the relative pose of the image acquisition assembly, further comprising: and adjusting the position and the posture of the manipulator corresponding to the tail end of the target instrument recorded in the operation console in real time so as to keep the position and the posture of the manipulator consistent with the position and the posture of the tail end of the target instrument.
A second aspect of the present specification provides a pose adjustment device of an image acquisition assembly, comprising: the first acquisition unit is used for acquiring a target image formed according to signals acquired by the image acquisition component under the current pose; a first determining unit configured to determine a reference coordinate system on the target image; a second acquisition unit configured to acquire a field-of-view adjustment instruction for controlling adjustment of a field of view of the image acquisition component with respect to the reference coordinate system; a second determining unit, configured to determine an adjustment direction of the image acquisition component corresponding to the field adjustment instruction; and the control unit is used for controlling and adjusting the pose of the image acquisition component according to the adjustment direction.
In some embodiments, the first determining unit comprises: a first obtaining subunit, configured to obtain a target location point on the target image; and the first establishing subunit is used for establishing a reference coordinate system according to the target position point.
In some embodiments, the first acquisition subunit comprises: a second obtaining subunit, configured to obtain a position of a laser point in the target image, and take the position of the laser point as a target position point; wherein the position of the laser spot is obtained by: acquiring a first image formed according to a signal acquired by an image acquisition component under the pose of the last moment; and controlling the laser at the tail end of the target instrument to emit laser to the target position at the current moment by referring to the first image, and controlling the image acquisition assembly to acquire signals.
In some embodiments, the first acquisition subunit comprises: the third acquisition subunit is used for acquiring a target position positioning instruction; a fourth obtaining subunit, configured to obtain, in response to the target position positioning instruction, a position of a distal end of the target instrument in the target image, and take the position of the distal end of the target instrument as a target position point; wherein the position of the target instrument tip is obtained by: acquiring a first image formed according to a signal acquired by an image acquisition component under the pose of the last moment; and controlling the tail end of the target instrument to move to the target position at the current moment by referring to the first image, and controlling the image acquisition assembly to acquire signals.
In some embodiments, the first acquisition subunit comprises: and a fifth acquisition subunit for acquiring, when the target image is displayed on the display screen, a position determined by the operator on the displayed target image, and determining the position determined on the displayed target image as a target position point on the target image.
In some embodiments, the first setup subunit comprises: a first determining subunit, configured to determine a first position coordinate of the target position point in the target object coordinate system; the target object coordinate system is a coordinate system established by taking a preset point on the target object as a coordinate origin; a second determination subunit configured to determine a second position coordinate of a predetermined point on the target object in the base coordinate system; a third determining subunit, configured to determine, according to the first position coordinate and the second position coordinate, a position of the target position point in a base coordinate system; and the second establishing subunit is used for establishing a reference coordinate system according to the position of the target position point under the base coordinate system.
In some embodiments, the first determining subunit includes: a sixth acquisition subunit, configured to acquire pixel coordinates of the target position point in the target image, where the pixel coordinates are coordinates determined with one pixel as one unit of length; and a conversion subunit, configured to convert the pixel coordinates into first position coordinates of the target position point in the image acquisition component coordinate system according to a preset conversion relation, where the preset conversion relation is used to convert a pixel length in the target image into an actual length in the base coordinate system.
In some embodiments, the first determining subunit further includes: a control subunit, configured to keep the pose of the image acquisition assembly unchanged, control the instrument tip to move in sequence to a plurality of position points within the field of view of the image acquisition assembly, and record the base coordinates of each position point in the base coordinate system; a fourth determination subunit, configured to determine the pixel coordinates of each of the plurality of position points in the target image; and a fifth determination subunit, configured to determine the conversion relation between pixels of the target image and actual lengths in the base coordinate system according to the pixel coordinates of each position point in the target image and its base coordinates in the base coordinate system.
In some embodiments, the first setup subunit comprises: a third establishing subunit, configured to establish a reference coordinate system by using a target position point as an origin of the reference coordinate system and using a coordinate axis direction of the image acquisition component coordinate system as a coordinate axis direction of the reference coordinate system; or, a fourth establishing subunit, configured to establish a reference coordinate system with the first target position point as an origin of the reference coordinate system, with a direction of the second target position point relative to the first target position point as an x-axis direction of the reference coordinate system, and with a z-axis direction of the image acquisition component coordinate system as a z-axis direction of the reference coordinate system; or the fifth establishing subunit is configured to establish the reference coordinate system with the first target position point as an origin of the reference coordinate system, with a direction of the second target position point relative to the first target position point as an x-axis direction of the reference coordinate system, and with a normal direction of a plane in which the first target position point, the second target position point, and the third target position point are located as a z-axis direction of the reference coordinate system.
In some embodiments, the second acquisition unit comprises: a seventh acquisition subunit for acquiring a visual field translation or rotation instruction sent by an operator through a manipulator on the operation console; or, a display subunit, configured to display the target image, and display at least one virtual area in an upper, lower, left, and right direction of the target image; and the instruction generation subunit is used for generating a translation instruction corresponding to the virtual area when the tail end of the target instrument in the target image moves into the virtual area.
In some embodiments, the apparatus further comprises: and the first adjusting unit is used for automatically adjusting the position of the tail end of the target instrument when the pose of the image acquisition assembly is adjusted according to the adjusting direction of the image acquisition assembly, so that the relative pose of the tail end of the target instrument and the image acquisition assembly is kept consistent.
In some embodiments, further comprising: and the second adjusting unit is used for adjusting the manipulator pose corresponding to the target instrument end recorded in the operation console in real time when the position of the target instrument end is automatically adjusted so that the relative pose of the target instrument end and the image acquisition assembly is consistent, so that the pose of the manipulator is consistent with the pose of the target instrument end.
A third aspect of the present specification provides a robot comprising: a base; the first mechanical arm and the second mechanical arm are arranged on the base; the operation tool is arranged at the tail end of the first mechanical arm, and the first mechanical arm can act to drive the pose of the operation tool to change; the image acquisition assembly is arranged at the tail end of the second mechanical arm and is used for acquiring an image of the operation condition of the operation tool, and the second mechanical arm can drive the pose of the image acquisition assembly to change; an imaging assembly for forming an image from the signals acquired by the image acquisition assembly; a display for presenting an image formed by the imaging assembly; the controller is used for acquiring a target image formed according to the signals acquired by the image acquisition component under the current pose; determining a reference coordinate system on the target image; acquiring a visual field adjustment instruction, wherein the visual field adjustment instruction is used for controlling the visual field of the image acquisition component to be adjusted relative to the reference coordinate system; determining an adjustment direction of an image acquisition component corresponding to the visual field adjustment instruction; and controlling and adjusting the pose of the image acquisition component according to the adjustment direction.
A fourth aspect of the present specification provides a controller comprising: the system comprises a memory and a processor, wherein the processor and the memory are in communication connection, the memory stores computer instructions, and the processor realizes the steps of the method in any one of the first aspect by executing the computer instructions.
A fifth aspect of the present description provides a computer storage medium storing computer program instructions which, when executed, implement the steps of the method of any one of the first aspects.
In the pose adjustment method, device and controller for the image acquisition component provided in this specification, a reference coordinate system is determined on the target image formed from the signals acquired by the image acquisition component; after a field-of-view adjustment instruction is acquired, the adjustment direction of the image acquisition component corresponding to the instruction is determined, and the pose of the image acquisition component is then adjusted accordingly. In this way, the pose of the image acquisition component can be adjusted while remaining in the control mode of the operation tool, which enriches the ways in which the pose of the image acquisition component can be adjusted, allows the pose to be adjusted quickly, and improves overall operating efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 shows a schematic perspective view of a surgical robotic system;
FIG. 2 shows a schematic diagram of a control-end device;
FIG. 3 shows another schematic diagram of a control-end device;
FIG. 4 illustrates a flow chart of a method of pose adjustment for an image acquisition assembly provided herein;
FIG. 5 illustrates a schematic view of a spatial pose coordinate system of an image acquisition assembly;
FIG. 6 illustrates an image acquired by the image acquisition assembly in the pose illustrated in FIG. 5;
FIG. 7 illustrates another flow chart of a method of pose adjustment for an image acquisition assembly provided herein;
FIG. 8 shows a schematic diagram of determining a target location point based on a target image displayed by a display screen;
FIG. 9 shows a schematic diagram of determining a target location point by emitting laser light;
FIG. 10 shows a flow chart of a method of establishing a reference coordinate system based on target location points;
FIG. 11 is a schematic diagram showing the control of the movement of the distal end of the instrument to a plurality of position points in succession when a preset transition relationship is determined;
FIG. 12 illustrates a schematic diagram of one embodiment of establishing a reference coordinate system;
FIG. 13 shows a schematic diagram of another embodiment of establishing a reference coordinate system;
FIG. 14 shows a schematic diagram of yet another embodiment of establishing a reference coordinate system;
FIG. 15 shows a schematic diagram of the superimposed display of virtual areas on a target image;
FIG. 16 illustrates a functional block diagram of a pose adjustment device of the image acquisition assembly provided herein;
fig. 17 shows a schematic block diagram of a controller provided in the present specification.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments. The described embodiments are obviously only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of protection of the present application.
In order to solve the problem that, when the operator cannot see the real-time operation with the naked eye, having to operate the automation device and adjust the image acquisition assembly at the same time makes the operation process cumbersome and lengthens task execution time, this specification provides a method for adjusting the image acquisition assembly based on the image: a reference coordinate system is determined on the real-time target image corresponding to the current pose of the image acquisition assembly, so that the field of view of the image acquisition assembly can be controlled relative to the reference coordinate system according to a field-of-view adjustment instruction.
The method of adjusting the image acquisition assembly provided in the present specification will be mainly described below by taking a surgical robotic system performing minimally invasive surgery as an example.
As shown in fig. 1, the surgical robot system generally consists of a control-end device 100, an execution-end device 200, and an image-end device 300. The control-end device 100, commonly referred to as the surgeon's console, is located outside the sterile field of the operating room and transmits control instructions to the execution-end device 200. The execution-end device 200, i.e. the surgical robot device (abbreviated as the surgical robot in this specification), clamps surgical instruments according to the control instructions in order to perform specific surgical operations on the patient. The surgical robot may be equipped with an endoscope head or an ultrasound probe (an endoscope head is used as the example hereinafter). The image-end device 300, generally referred to as the image cart, processes the information acquired by the endoscope head to form a three-dimensional high-definition stereoscopic image and feeds it back to the control-end device 100 and other devices.
As shown in fig. 2, the control-end device 100 may also be called the operation console or surgeon console. The surgeon console is provided with a master manipulator, an imaging device, a foot pedal, and other components for generating operation instructions for the surgical robot. The master manipulator includes a left-hand manipulation member and a right-hand manipulation member: the left-hand member is matched to the chief surgeon's left hand so that operation instructions can be generated by left-hand operation, and the right-hand member is matched to the chief surgeon's right hand so that operation instructions can be generated by right-hand operation. The foot pedal may be used to switch between two modes, an "operating instrument mode" and an "adjust endoscope mode", as shown in fig. 3.
The imaging device presents to the chief surgeon the stereoscopic image inside the patient captured by the endoscope, providing reliable image information for performing the operation. During surgery, the chief surgeon observes the returned intracavity three-dimensional image on the imaging device and, through the master manipulator, controls the mechanical arm mechanism and the surgical instruments on the surgical robot to perform the various manipulations required to operate on the patient.
The term "surgical operation" in this specification includes not only therapeutic operations such as excision and suturing of the patient's body with a medical device, but also operations such as cutting, clamping or puncturing (i.e. biopsy) performed to remove lesion tissue from the patient for pathological examination; that is, a surgical operation in this specification refers to treatment applied to the patient's body in the course of diagnosis and treatment.
In the present specification, the term "instrument" may refer to a tool or an appliance, and is not limited to a specific type of tool.
As shown in fig. 4, the pose adjustment method of the image acquisition assembly provided in the present specification includes the following steps:
s10: a target image formed from signals acquired by the image acquisition assembly in a current pose is acquired.
An image acquisition component is a component used to acquire images. Some image acquisition components can form images by themselves, such as a component with a built-in CMOS image sensor; others cannot form images directly and only acquire intermediate data that must be processed by a processor to obtain an image. For example, an ultrasound detector can only acquire ultrasonic echo signals, which must be processed to obtain an ultrasound image.
The image acquisition component may be, for example, the lens of an endoscope or the probe head of an ultrasound detector.
An endoscope is a common medical instrument consisting of a light-guide bundle structure and a group of lenses. For example, the laparoscopes commonly used in clinical practice use a series of optical lenses to relay the image to the eyepiece, where it is imaged by a separately connected camera. According to the imaging principle, endoscopes can be classified into optical endoscopes (cylindrical lenses), fiberscopes, electronic endoscopes, and the like.
For example, in the surgical robot system shown in fig. 1 and 2, the image acquisition assembly may be an endoscope inserted into the patient's body cavity through an orifice; its pose may be adjusted by one mechanical arm of the surgical robot, or by a small motor built into the endoscope head. The image-end device 300 forms a highly magnified stereoscopic image from the signals acquired by the endoscope head and feeds the stereoscopic image back to the surgeon console.
A change in the pose of the image acquisition component changes the field of view of the corresponding image; in other words, the image acquisition component referred to here is mainly a component whose change of pose changes the field of view of the image.
The target image is formed from signals acquired by the image acquisition component in its current pose. As time passes, the pose at each moment can be the current pose, so the target image corresponding to the current pose is essentially an image that changes in real time. The pose adjustment method of the image acquisition component provided in this specification is therefore essentially a method of adjusting the pose of the image acquisition component, and the field of view of the target image, in real time.
S20: a reference coordinate system is determined on the target image.
The image acquisition component may involve several coordinate systems, e.g. the spatial pose coordinate system of the image acquisition component and the image coordinate system of the image acquisition component. Fig. 5 shows a schematic view of the spatial pose coordinate system of the image acquisition assembly, where A is the operated object (for example a target tissue or target site), B is the operation tool (for example a surgical instrument), and the abc coordinate system is the spatial pose coordinate system of the image acquisition assembly used for adjusting its pose. The dashed line indicated by O is the view axis of the image acquisition assembly, and O' is the projection of the view axis onto the image. Fig. 6 shows the image acquired by the image acquisition component in the pose shown in fig. 5. As can be seen from fig. 5 and fig. 6 together, the xyz coordinate system is the image coordinate system, i.e. a coordinate system established with the projection point of the view axis in the image as the coordinate origin, and is used for locating the position of the operated object in the image.
The reference coordinate system described in step S20 is neither the spatial pose coordinate system shown in fig. 5 nor the image coordinate system shown in fig. 6. Like the image coordinate system shown in fig. 6, the reference coordinate system is established on the image rather than in the space near the image acquisition assembly. However, the image coordinate system shown in fig. 6 is fixed, with the projection point of the view axis on the image as its coordinate origin, whereas the coordinate origin of the reference coordinate system described in step S20 may be set at the center of the image (i.e. the projection point of the view axis on the image) or at any other position on the image, and the position of the coordinate origin can be selected by the operator according to actual needs. In addition, the xo'y plane of the image coordinate system shown in fig. 6 is perpendicular to the view axis and its z axis is parallel to the view axis, whereas the direction of each axis of the reference coordinate system described in step S20 can be selected by the operator according to actual needs.
Furthermore, the image coordinate system shown in fig. 6 is used to determine the position of the object in the image, whereas the reference coordinate system in step S20 is used to adjust the content observed by the image acquisition component, e.g. by translating or rotating it relative to the reference coordinate system.
In some embodiments, a plurality of reference coordinate systems with different coordinate axis directions may be preset, and when the reference coordinate system is determined on the target image, one of the preset reference coordinate systems may be selected by the operator and placed at a specified position on the target image.
The reference coordinate system may or may not be displayed superimposed on the target image.
In some embodiments, a plurality of selectable reference coordinate systems are not preset, in which case a target position point may be selected on the target image by the operator, and then the reference coordinate system may be automatically established according to the target position point selected by the operator.
That is, as shown in fig. 7, step S20 further includes:
s21: and acquiring a target position point on the target image.
S22: and establishing a reference coordinate system according to the target position point.
In some embodiments, the target location point may be determined by an operator on a target image displayed on a display screen. Then, step S21 may be: when the target image is displayed on the display screen, acquiring the position determined by the operator on the displayed target image; the position determined on the displayed target image is determined as a target position point on the target image.
The target position point may be determined on the displayed target image by, for example, clicking with a mouse and taking the clicked position as the target position point, or, when the display screen is a touch screen, directly tapping a point on the touch screen and taking the tapped position as the target position point; the specific form of interaction is not limited to the above techniques.
Fig. 8 shows a schematic diagram of determining a target position point based on a target image displayed on a display screen, wherein 1 is the display screen, 2 is an operated object, 3 is the target position point, and 4 is a selection tool (e.g., a mouse cursor, a stylus, etc.).
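As an illustrative sketch only (not part of the patent), the interaction of fig. 8 could be captured roughly as follows, assuming the target image is shown in an OpenCV window; the window name, file path and abort key are assumptions:

```python
import cv2

clicked_point = None  # pixel coordinates (u, v) of the target position point

def on_mouse(event, x, y, flags, param):
    # Record the position the operator clicks on the displayed target image.
    global clicked_point
    if event == cv2.EVENT_LBUTTONDOWN:
        clicked_point = (x, y)

target_image = cv2.imread("target_image.png")      # placeholder path
cv2.namedWindow("target image")
cv2.setMouseCallback("target image", on_mouse)
while clicked_point is None:
    cv2.imshow("target image", target_image)
    if cv2.waitKey(20) == 27:                       # Esc aborts the selection
        break
cv2.destroyAllWindows()
print("target position point (pixel):", clicked_point)
```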
In some embodiments, the target location point may be determined by an operator controlling the instrument tip motion.
For example, the target location point may be a location point where the laser set on the distal end of the instrument is directed by the operator to emit laser light toward the target location point in the current field of view of the image acquisition assembly. Then, step S21 may be: acquiring the position of a laser point in a target image, and taking the position of the laser point as a target position point; wherein the position of the laser spot is obtained by: acquiring a first image formed according to a signal acquired by an image acquisition component under the pose of the last moment; and (3) keeping the pose of the image acquisition component unchanged, controlling a laser at the tail end of the target instrument to emit laser to the target position at the current moment by referring to the first image, and controlling the image acquisition component to acquire signals.
The first image corresponding to the previous moment is presented to the operator so that the operator can conveniently determine the target position at which to aim the laser. Because the pose of the image acquisition component is kept unchanged, the target image corresponding to the current moment is not displaced relative to the target image corresponding to the previous moment; the only difference is whether a laser point appears in the image. The laser point in this specification refers to the spot formed on the target object when the laser irradiates it; when the laser light path itself is clearly visible, the laser point refers to the distal end point of the light path.
Fig. 9 shows a schematic diagram of determining a target position point by emitting laser light. Wherein 1 is a display screen, 2 is an operated object, 3 is a target instrument provided with a laser, and 4 is a laser point, namely a target position point.
The "current time" and "last time" as used herein may refer to adjacent acquisition times or may not be adjacent acquisition times. The "current time" and the "last time" are only used to indicate the sequence of actions at two times, and the time interval between the current time and the last time is not limited.
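The patent does not prescribe how the laser point is located in the target image. The sketch below shows one possible approach under the assumption that the spot is the brightest region of the smoothed frame; the blur size and intensity threshold are assumptions:

```python
import cv2

def find_laser_point(image_bgr):
    """Return the (u, v) pixel of the assumed laser spot, or None if not found."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (9, 9), 0)         # suppress specular noise
    _, max_val, _, max_loc = cv2.minMaxLoc(gray)     # brightest pixel after smoothing
    return max_loc if max_val > 200 else None        # intensity threshold is an assumption

frame = cv2.imread("current_frame.png")              # image at the current moment (placeholder path)
if frame is not None:
    target_position_point = find_laser_point(frame)
```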
In some embodiments, where the instrument tip is not provided with a laser, the target location point may be determined by controlling the moving instrument tip. Then, step S21 may be: acquiring a target position positioning instruction; responding to a target position positioning instruction, acquiring the position of the tail end of the target instrument in the target image, and taking the position of the tail end of the target instrument as a target position point; wherein the position of the target instrument tip is obtained by: acquiring a first image formed according to a signal acquired by an image acquisition component under the pose of the last moment; and (3) keeping the pose of the image acquisition assembly unchanged, controlling the tail end of the target instrument to move to the target position at the current moment by referring to the first image, and controlling the image acquisition assembly to acquire signals.
The first image corresponding to the previous moment is presented to the operator so that the operator can conveniently determine the target position to which the instrument tip needs to be moved. Because the pose of the image acquisition assembly is kept unchanged, the target image corresponding to the current moment is not displaced relative to the target image corresponding to the previous moment; the only difference is whether the instrument tip has moved to the target position.
The target position positioning instruction is used to designate the current position of the target instrument as the target position point. The target position positioning instruction may be generated by the operator via the manipulator; for example, the chief surgeon may generate it by pressing a switch or key on the right-hand (or left-hand) manipulation member.
When controlling the movement of the target instrument tip, the robot system usually determines the target position to be reached from the operation instruction, then moves each joint component of the robot so as to bring the target instrument tip to that position, and in this process calculates the actual position of the target instrument tip from the actual motion state of the joints. It follows that in a robot system the position of the target instrument tip is known; after the target position positioning instruction is acquired, the recorded position of the target instrument tip can be read directly.
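As a hedged illustration of the point above, the recorded tip position can conceptually be obtained by chaining per-joint homogeneous transforms; the simple revolute-joint model and the link lengths below are placeholders, not the actual robot's kinematics:

```python
import numpy as np

def joint_transform(theta, link_length):
    """Illustrative planar-style transform for one revolute joint (placeholder model)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, link_length * c],
                     [s,  c, 0.0, link_length * s],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def tip_position_in_base(joint_angles, link_lengths):
    """Chain the joint transforms to get the target instrument tip in the base frame."""
    T = np.eye(4)
    for theta, length in zip(joint_angles, link_lengths):
        T = T @ joint_transform(theta, length)
    return T[:3, 3]

print(tip_position_in_base([0.3, -0.5, 0.1], [0.4, 0.3, 0.1]))
```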
For example, in fig. 9, 5 indicates a target instrument in which a laser is not provided, and the position of the target instrument 5 can be acquired as a target position point.
A base coordinate system is usually used when controlling the robot. The base coordinate system can be established with any point on the robot as its origin, and during motion control it is mostly used to locate the various parts of the robot.
In some embodiments, the location of the target location point is a coordinate in the base coordinate system, e.g., the actual location of the distal end of the target instrument is used as the target location point, where the target location point is a coordinate in the base coordinate system.
However, in some embodiments, after the target location point is determined, the coordinates of the target location point under the base frame are not directly available. Based on this, as shown in fig. 10, step S22 includes the steps of:
s221: and determining a first position coordinate of the target position point under a target object coordinate system, wherein the target object coordinate system is a coordinate system established by taking a preset point on the target object as a coordinate origin.
In some embodiments, it is easier to first determine the coordinates of the target position point in the target object coordinate system. For example, as shown in fig. 8, when the operator designates the target position point on the target image displayed on the display screen, only the coordinates of the target position point in the image acquisition component coordinate system, or relative to some position in the image, can be determined. As shown in fig. 9, when the target position point is designated by emitting laser light from a laser arranged at the target instrument tip, the position and emission direction of the laser are known, but the position of the spot formed where the laser strikes the target object cannot be determined directly from the robot's motion control data; the coordinates of the target position point therefore have to be determined from the target image, and coordinates determined from the target image are again only coordinates in the image acquisition component coordinate system or relative to some position in the image.
In some embodiments, the target object coordinate system may be the spatial pose coordinate system shown in fig. 5, i.e., the image acquisition component coordinate system. Of course, the coordinate system of the image acquisition assembly may be established at other locations in the vicinity of the image acquisition assembly as desired in addition to the coordinate system of the image acquisition assembly established at the locations shown in fig. 5.
In some embodiments, the target object coordinate system may be a coordinate system of the end of the target instrument, which is a coordinate system established with a laser (or an exit end of the laser) disposed at the end of the target instrument as a coordinate origin.
In some embodiments, step S221 may be: acquiring the pixel coordinates of the target position point in the target image; and converting the pixel coordinates into first position coordinates of the target position point in the image acquisition component coordinate system according to a preset conversion relation, where the preset conversion relation is used to convert a pixel length in the target image into an actual length in the base coordinate system.
Assume that p_pixel represents the pixel coordinates and T_pixel is the conversion matrix corresponding to the preset conversion relation; then p_C = T_pixel · p_pixel, where p_C represents the first position coordinate of the target position point C in the target object coordinate system and · denotes matrix multiplication. The conversion matrix T_pixel may be predetermined.
Specifically, the preset conversion relation may be obtained as follows: keeping the pose of the image acquisition assembly unchanged, control the instrument tip to move in sequence to a plurality of position points within the field of view of the image acquisition assembly and record the base coordinates of each position point in the base coordinate system; determine the pixel coordinates of each of the plurality of position points in the target image; and determine the conversion relation between pixels of the target image and actual lengths in the base coordinate system from the pixel coordinates of each position point in the target image and its base coordinates in the base coordinate system.
As shown in fig. 11, the instrument tip can be controlled to move in sequence to 9 position points within the field of view of the image acquisition component, and the base coordinate of each position point in the base coordinate system is recorded; the pixel coordinate of each position point in the target image is also determined. Each position point thus has two coordinates, a base coordinate and a pixel coordinate, and a mapping exists between them. From the 9 pairs of coordinates corresponding to the 9 position points, the mapping relation, i.e. the conversion relation, between the pixel coordinate and the base coordinate of an arbitrary position point can be determined.
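The patent does not state how the conversion relation is computed from the 9 coordinate pairs; a least-squares affine fit from pixel coordinates to the recorded metric coordinates is one plausible sketch. The calibration values below are made-up placeholders:

```python
import numpy as np

# Made-up example data: pixel coordinates and recorded metric coordinates of the
# position points the instrument tip was driven to (9 points, as in fig. 11).
pixel_pts = np.array([[100, 100], [320, 100], [540, 100],
                      [100, 240], [320, 240], [540, 240],
                      [100, 380], [320, 380], [540, 380]], dtype=float)
metric_pts = np.array([[0.0002 * u, 0.0002 * v, 0.15] for u, v in pixel_pts])

A = np.hstack([pixel_pts, np.ones((len(pixel_pts), 1))])   # homogeneous pixel coordinates
T_pixel, *_ = np.linalg.lstsq(A, metric_pts, rcond=None)   # 3x3 affine conversion (least squares)

def pixel_to_first_position(u, v):
    """Convert a pixel coordinate to the first position coordinate via the fitted relation."""
    return np.array([u, v, 1.0]) @ T_pixel

print(pixel_to_first_position(320, 240))
```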
S222: a second position coordinate of the predetermined point on the target object in the base coordinate system is determined.
The image acquisition component is mounted at the end of one mechanical arm of the robot, so adjusting its pose involves adjusting the joints of that mechanical arm; the second position coordinate of the image acquisition component in the base coordinate system can therefore be determined directly from the robot's motion control data.
S223: and determining the position of the target position point under the base coordinate system according to the first position coordinate and the second position coordinate.
Since the pose of the image acquisition component is adjusted in the base coordinate system, it is necessary to determine the position coordinates of the target position point (in particular, the point serving as the origin of the reference coordinate system) in the base coordinate system.
Assume that p_C (the first position coordinate) represents the coordinates of the target position point C in the target object coordinate system, and that T_object (the second position coordinate, expressed as a homogeneous transform) represents the pose of the predetermined point on the target object in the base coordinate system; then P_C = T_object · p_C, where P_C represents the position coordinate of the target position point C in the base coordinate system.
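Under the assumption that the second position coordinate is available as a homogeneous transform of the target object frame in the base frame (the patent only states that the base-frame position is determined from the first and second position coordinates), step S223 might be sketched as:

```python
import numpy as np

def target_point_in_base(T_base_object, p_object):
    """p_object: first position coordinate of point C in the target object frame.
    T_base_object: assumed 4x4 pose of the target object frame in the base frame."""
    p_h = np.append(p_object, 1.0)        # homogeneous coordinates
    return (T_base_object @ p_h)[:3]      # position of C in the base coordinate system
```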
S224: and establishing a reference coordinate system according to the position of the target position point under the base coordinate system.
In some embodiments, only one target position point may be determined, and then the reference coordinate system may be established with the target position point as an origin, and the direction of the coordinate axis of the reference coordinate system may be a coordinate axis direction of a coordinate system set in advance, or a coordinate axis direction of the coordinate system of the image capturing component may be a coordinate axis direction of the reference coordinate system.
As shown in fig. 12, O is a target position point, the dashed lines indicated by x0 respectively indicate the x-axis direction of the image acquisition component coordinate system, the dashed lines indicated by y0 respectively indicate the y-axis direction of the image acquisition component coordinate system, and the xyz coordinate system is an established reference coordinate system.
In some embodiments, two target position points may be determined; the reference coordinate system is then established with the first target position point as its origin, the direction of the second target position point relative to the first as its x-axis direction, and the z-axis direction of the image acquisition component coordinate system (or of a preset coordinate system) as its z-axis direction.
As shown in FIG. 13, O1 is the first target position point and O2 is the second target position point; the dashed lines indicated by x0 and y0 represent the x-axis and y-axis directions of the image acquisition component coordinate system, the upward arrow at O2 indicates the z-axis direction of the image acquisition component coordinate system, and the x1y1z1 coordinate system is the established reference coordinate system.
In some embodiments, three target position points may be determined, and then the reference coordinate system may be established with the first target position point as the origin of the reference coordinate system, the direction of the second target position point relative to the first target position point as the x-axis direction of the reference coordinate system, and the normal direction of the plane in which the first target position point, the second target position point, and the third target position point are located as the z-axis direction of the reference coordinate system.
As shown in FIG. 14, O1 is the first target position point, O2 is the second target position point, and O3 is the third target position point; the dashed lines indicated by x0 and y0 represent the x-axis and y-axis directions of the image acquisition component coordinate system, the upward arrow at O3 represents the normal direction of the plane formed by the first target position point O1, the second target position point O2 and the third target position point O3, and the x1y1z1 coordinate system is the established reference coordinate system.
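The three variants above can be collected into a single illustrative routine; the re-orthogonalization step and the return format (origin plus a rotation matrix whose columns are the x, y, z axes) are implementation assumptions:

```python
import numpy as np

def _unit(v):
    return v / np.linalg.norm(v)

def reference_frame(p1, p2=None, p3=None, cam_axes=np.eye(3)):
    """Build the reference coordinate system from 1, 2 or 3 target position points
    (all expressed in the base frame); cam_axes holds the image acquisition
    component coordinate axes as columns."""
    if p2 is None:                        # one point: reuse the camera axes
        return p1, cam_axes
    x = _unit(p2 - p1)                    # x axis: from the first point toward the second
    if p3 is None:                        # two points: keep the camera z axis
        z = cam_axes[:, 2]
    else:                                 # three points: z is the normal of their plane
        z = _unit(np.cross(p2 - p1, p3 - p1))
    z = _unit(z - np.dot(z, x) * x)       # re-orthogonalize z against x
    y = np.cross(z, x)
    return p1, np.column_stack([x, y, z])
```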
S30: and acquiring a visual field adjusting instruction, wherein the visual field adjusting instruction is used for controlling the visual field of the image acquisition component to be adjusted relative to a reference coordinate system.
The field-of-view adjustment instruction may be generated in the operation tool control mode, for example by an operator performing a predetermined operation on the manipulator that differs from the control operation of the operation tool. The field-of-view adjustment instruction may also be generated in the image acquisition component control mode.
It should be noted that the field-of-view adjustment instruction is used to control the field of view of the image acquisition assembly; it does not directly specify an adjustment of the pose of the image acquisition assembly itself.
An adjustment of the field of view of the image acquisition assembly relative to the reference coordinate system corresponds to an adjustment of the pose of the image acquisition assembly, but the two are not the same and the correspondence need not be a fixed mapping. Adjusting the pose of the image acquisition component involves its position and orientation in space relative to other target objects, and controlling that adjustment directly is complex. The pose adjustment method provided in this specification presents the reference coordinate system on the target image to the operator, so the operator only needs to give an adjustment instruction for the target image relative to the reference coordinate system, which makes the operation more intuitive, simple, and convenient.
In some embodiments, S30 may be: a field of view translation or rotation command sent by an operator through a manipulator on a console is obtained.
For example, a field-of-view translation command may be generated by moving the left-hand or right-hand manipulator, with the translation direction matching the direction of the manipulator's motion; a command to rotate about the x-axis may be generated by pressing a laser key on the left-hand manipulator, and a command to rotate about the y-axis may be generated by pressing a laser key on the right-hand manipulator.
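As a rough, non-authoritative sketch of how such manipulator inputs might be mapped to field-of-view commands (the class, function, parameter names, and rotation step below are invented for illustration and are not taken from the patent):

from dataclasses import dataclass
from typing import Optional

@dataclass
class FieldOfViewCommand:
    # Translation expressed in the reference coordinate system; rotation axis is 'x' or 'y'.
    translation: Optional[tuple] = None      # e.g. (dx, dy, dz)
    rotation_axis: Optional[str] = None
    rotation_angle: float = 0.0

def command_from_manipulator(motion_delta, left_laser_key, right_laser_key,
                             rotation_step=0.05):
    # motion_delta    : (dx, dy, dz) displacement of the manipulator this cycle
    # left_laser_key  : True while the left-hand laser key is pressed
    # right_laser_key : True while the right-hand laser key is pressed
    if left_laser_key:
        return FieldOfViewCommand(rotation_axis='x', rotation_angle=rotation_step)
    if right_laser_key:
        return FieldOfViewCommand(rotation_axis='y', rotation_angle=rotation_step)
    if any(abs(d) > 1e-6 for d in motion_delta):
        # Field of view translates in the same direction as the manipulator moves.
        return FieldOfViewCommand(translation=tuple(motion_delta))
    return None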
In some embodiments, S30 may be: displaying a target image, and displaying at least one virtual area in the upper, lower, left and right directions of the target image; when the tail end of the target instrument in the target image moves into the virtual area, a translation instruction corresponding to the position of the virtual area is generated.
As shown in fig. 15, which is a schematic diagram of displaying virtual areas superimposed on a target image displayed on a display screen, a virtual area is displayed on each of the upper, lower, left and right sides of the target image, and when the target instrument moves to the right virtual area, an instruction is generated to shift the field of view of the image acquisition assembly rightward with respect to the reference coordinate system. The coordinate system shown in the figure is the reference coordinate system.
Because the field-of-view adjustment instruction is determined by whether the image of the target instrument lies in a virtual area on the display screen, there is no need to switch back and forth between the image acquisition component control mode and the instrument control mode, which simplifies the operation.
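A minimal sketch of this virtual-area logic, assuming the virtual areas are rectangular bands of a given pixel width along the four image borders; the function name, the margin value, and the returned command strings are illustrative assumptions, not the patent's interface:

def pan_command_from_virtual_areas(tip_pixel, image_width, image_height, margin=60):
    # tip_pixel : (u, v) pixel coordinates of the target instrument tip in the target image
    # margin    : width in pixels of each virtual border area
    u, v = tip_pixel
    if u >= image_width - margin:
        return 'pan_right'
    if u <= margin:
        return 'pan_left'
    if v <= margin:
        return 'pan_up'
    if v >= image_height - margin:
        return 'pan_down'
    return None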
S40: and determining the adjustment direction of the image acquisition component corresponding to the vision field adjustment instruction.
The field-of-view adjustment instruction includes translation, rotation, and similar adjustments of the field of view of the image acquisition component. From the field-of-view adjustment instruction, the displacement of a point on the target object in the target image relative to the image acquisition component can be determined; because the target object is stationary, the adjustment direction of the image acquisition component can then be deduced in reverse from that displacement.
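A minimal sketch of this reverse deduction for the translation case, under the simplifying assumption that a pure translation of the field of view maps one-to-one to a translation of the camera, re-expressed from the reference coordinate system into the base coordinate system (numpy and all names below are illustrative, not the patent's prescribed computation):

import numpy as np

def camera_motion_from_view_command(delta_view_ref, T_base_ref):
    # delta_view_ref : desired translation of the field of view, expressed in the
    #                  reference coordinate system (e.g. [dx, 0, 0] for 'pan right')
    # T_base_ref     : 4x4 pose of the reference coordinate system in the base frame
    # The target object is stationary, so shifting the field of view by delta in
    # the reference frame is achieved by moving the camera by the same delta,
    # re-expressed in the base coordinate system.
    R_base_ref = T_base_ref[:3, :3]
    return R_base_ref @ np.asarray(delta_view_ref, dtype=float)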
S50: and controlling and adjusting the pose of the image acquisition component according to the adjustment direction.
In some embodiments, as shown in fig. 7, the pose adjustment method of the image acquisition component further includes S60: when the pose of the image acquisition component is adjusted according to the adjustment direction of the image acquisition component, the position of the tail end of the target instrument can be automatically adjusted, so that the relative pose of the tail end of the target instrument and the image acquisition component is kept consistent.
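One way to realize S60, sketched here under the assumption that all poses are available as 4x4 homogeneous transforms in the base coordinate system; the function and variable names are illustrative and not defined by the patent:

import numpy as np

def follow_camera(T_base_cam_old, T_base_cam_new, T_base_tip_old):
    # Compute the new instrument-tip pose that preserves the tip/camera relative pose.
    # All arguments are 4x4 homogeneous transforms expressed in the base frame.
    # Relative pose of the tip in the camera frame before the adjustment:
    T_cam_tip = np.linalg.inv(T_base_cam_old) @ T_base_tip_old
    # Re-attach the same relative pose to the camera's new pose.
    return T_base_cam_new @ T_cam_tip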
In some embodiments, at the operator end, control can be switched between the image acquisition assembly control mode and the instrument control mode, and the operator then issues control instructions to the robot in the corresponding mode by controlling the manipulator. The operator end also records the pose of the manipulator in each mode and the corresponding pose of the image acquisition component or instrument on the robot. At the robot end, the pose of the image acquisition component and the pose of the instrument are recorded, and this pose data is fed back to the operator end. For the operator end to accurately control the image acquisition component and instrument at the robot end, the manipulator pose recorded at the operator end must correspond to and remain consistent with the poses of the image acquisition component and the instrument; that is, master-slave consistency must be maintained.
Based on this, as shown in fig. 7, the pose adjustment method of the image acquisition assembly further includes S70: when the position of the tail end of the target instrument is automatically adjusted so that the relative pose of the tail end of the target instrument and the image acquisition assembly is kept consistent, the manipulator pose corresponding to the tail end of the target instrument recorded in the operation console is adjusted in real time, so that the pose of the manipulator remains consistent with the pose of the tail end of the target instrument, thereby maintaining master-slave consistency. The operation console, i.e. the control end device 100 shown in fig. 1, may be, for example, a doctor console.
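A possible form of this real-time record update, sketched with an invented ConsoleRecord structure and an assumed fixed master-slave mapping T_manip_from_tip; neither of these is defined by the patent:

from dataclasses import dataclass, field
import numpy as np

@dataclass
class ConsoleRecord:
    # Poses recorded at the operator end (4x4 homogeneous transforms).
    manipulator_pose: np.ndarray = field(default_factory=lambda: np.eye(4))
    instrument_tip_pose: np.ndarray = field(default_factory=lambda: np.eye(4))

def refresh_master_slave_record(record, T_base_tip_new, T_manip_from_tip):
    # T_base_tip_new   : new pose of the target instrument tip after automatic adjustment
    # T_manip_from_tip : assumed fixed mapping from tip pose to manipulator pose,
    #                    taken from the teleoperation registration
    record.instrument_tip_pose = T_base_tip_new
    # Re-derive the manipulator pose so the next teleoperation increment is
    # computed against the tip's actual pose, preserving master-slave consistency.
    record.manipulator_pose = T_manip_from_tip @ T_base_tip_new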
According to the pose adjustment method of the image acquisition assembly described above, a reference coordinate system is determined on a target image formed from signals acquired by the image acquisition assembly; after a field-of-view adjustment instruction is acquired, the adjustment direction of the image acquisition component corresponding to that instruction is determined, and the pose of the image acquisition component is then adjusted according to the adjustment direction. In this way, the pose of the image acquisition component can be adjusted while remaining in the operation tool control mode, which enriches the available pose adjustment modes, allows the pose of the image acquisition component to be adjusted quickly, and improves overall operation efficiency.
The present disclosure provides a pose adjustment device for an image acquisition assembly, which may be used to implement the pose adjustment method for an image acquisition assembly described above. As shown in fig. 16, the apparatus includes a first acquisition unit 10, a first determination unit 20, a second acquisition unit 30, a second determination unit 40, and a control unit 50.
The first acquisition unit 10 is used for acquiring a target image formed from signals acquired by the image acquisition component in the current pose. The first determining unit 20 is configured to determine a reference coordinate system on the target image. The second acquisition unit 30 is configured to acquire a view adjustment instruction for controlling adjustment of the view of the image acquisition assembly with respect to the reference coordinate system. The second determining unit 40 is configured to determine an adjustment direction of the image capturing component corresponding to the view adjustment instruction. The control unit 50 is configured to control and adjust the pose of the image acquisition component according to the adjustment direction.
In some embodiments, the first determining unit comprises: a first obtaining subunit, configured to obtain a target location point on the target image; and the first establishing subunit is used for establishing a reference coordinate system according to the target position point.
In some embodiments, the first acquisition subunit comprises: a second obtaining subunit, configured to obtain a position of a laser point in the target image, and take the position of the laser point as a target position point; wherein the position of the laser spot is obtained by: acquiring a first image formed according to a signal acquired by an image acquisition component under the pose of the last moment; and controlling the laser at the tail end of the target instrument to emit laser to the target position at the current moment by referring to the first image, and controlling the image acquisition assembly to acquire signals.
In some embodiments, the first acquisition subunit comprises: the third acquisition subunit is used for acquiring a target position positioning instruction; a fourth obtaining subunit, configured to obtain, in response to the target position positioning instruction, a position of a distal end of the target instrument in the target image, and take the position of the distal end of the target instrument as a target position point; wherein the position of the target instrument tip is obtained by: acquiring a first image formed according to a signal acquired by an image acquisition component under the pose of the last moment; and controlling the tail end of the target instrument to move to the target position at the current moment by referring to the first image, and controlling the image acquisition assembly to acquire signals.
In some embodiments, the first acquisition subunit comprises: and a fifth acquisition subunit for acquiring, when the target image is displayed on the display screen, a position determined by the operator on the displayed target image, and determining the position determined on the displayed target image as a target position point on the target image.
In some embodiments, the first setup subunit comprises: a first determining subunit, configured to determine a first position coordinate of the target position point in the target object coordinate system; the target object coordinate system is a coordinate system established by taking a preset point on the target object as a coordinate origin; a second determination subunit configured to determine a second position coordinate of a predetermined point on the target object in the base coordinate system; a third determining subunit, configured to determine, according to the first position coordinate and the second position coordinate, a position of the target position point in a base coordinate system; and the second establishing subunit is used for establishing a reference coordinate system according to the position of the target position point under the base coordinate system.
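The composition of the first and second position coordinates described above can be sketched as follows, assuming the orientation of the target object coordinate system in the base frame is known (for example from the robot's forward kinematics), so that, together with the second position coordinate of the predetermined point, it defines the 4x4 transform T_base_obj; the names are illustrative only:

import numpy as np

def target_point_in_base(p_obj, T_base_obj):
    # p_obj      : first position coordinate, i.e. the target position point
    #              expressed in the target object coordinate system
    # T_base_obj : 4x4 pose of the target object coordinate system in the base
    #              frame; its translation part corresponds to the second position
    #              coordinate (the predetermined point on the target object)
    p_h = np.append(np.asarray(p_obj, dtype=float), 1.0)   # homogeneous coordinates
    return (T_base_obj @ p_h)[:3]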
In some embodiments, the first determining subunit comprises: a sixth acquisition subunit, configured to acquire pixel coordinates of a target location point in the target image; wherein the pixel coordinates are coordinates determined with one pixel as one length unit; the conversion subunit is used for converting the pixel coordinates into first position coordinates of the target position point under the coordinate system of the image acquisition component according to a preset conversion relation; the preset conversion relation is used for converting the pixel length in the target image into the actual length under the base coordinate system.
In some embodiments, the first determining subunit further comprises: the control subunit is used for keeping the pose of the image acquisition assembly unchanged, controlling the tail end of the instrument to sequentially move to a plurality of position points in the field of view of the image acquisition assembly, and recording the base coordinates of each position point under the base coordinate system; a fourth determination subunit configured to determine pixel coordinates of each of the plurality of location points in the target image; and a fifth determination subunit, configured to determine a conversion relationship between the pixel length in the target image and the actual length in the base coordinate system according to the pixel coordinates of each position point in the target image and its base coordinates in the base coordinate system.
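The conversion relation described above can be estimated, for example, by least squares over the recorded points. The sketch below assumes the recorded points lie roughly in a plane parallel to the image plane, so a single scale factor relates pixel distance to actual distance; the function name and this simplification are illustrative assumptions, not the patent's prescribed procedure:

import numpy as np

def fit_pixel_to_length(pixel_points, base_points):
    # pixel_points : Nx2 array of pixel coordinates of the instrument tip at each
    #                recorded position (image acquisition assembly held still)
    # base_points  : Nx3 array of the corresponding base-frame coordinates
    # Returns the estimated actual length per pixel.
    pixel_points = np.asarray(pixel_points, dtype=float)
    base_points = np.asarray(base_points, dtype=float)
    pixel_d, base_d = [], []
    for i in range(len(pixel_points)):
        for j in range(i + 1, len(pixel_points)):
            pixel_d.append(np.linalg.norm(pixel_points[i] - pixel_points[j]))
            base_d.append(np.linalg.norm(base_points[i] - base_points[j]))
    pixel_d, base_d = np.asarray(pixel_d), np.asarray(base_d)
    # Least-squares scale factor s minimizing ||base_d - s * pixel_d||^2.
    return float(np.dot(pixel_d, base_d) / np.dot(pixel_d, pixel_d))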
In some embodiments, the first setup subunit comprises: a third establishing subunit, configured to establish a reference coordinate system by using a target position point as an origin of the reference coordinate system and using a coordinate axis direction of the image acquisition component coordinate system as a coordinate axis direction of the reference coordinate system; or, a fourth establishing subunit, configured to establish a reference coordinate system with the first target position point as an origin of the reference coordinate system, with a direction of the second target position point relative to the first target position point as an x-axis direction of the reference coordinate system, and with a z-axis direction of the image acquisition component coordinate system as a z-axis direction of the reference coordinate system; or the fifth establishing subunit is configured to establish the reference coordinate system with the first target position point as an origin of the reference coordinate system, with a direction of the second target position point relative to the first target position point as an x-axis direction of the reference coordinate system, and with a normal direction of a plane in which the first target position point, the second target position point, and the third target position point are located as a z-axis direction of the reference coordinate system.
In some embodiments, the second acquisition unit comprises: a seventh acquisition subunit for acquiring a visual field translation or rotation instruction sent by an operator through a manipulator on the operation console; or, a display subunit, configured to display the target image, and display at least one virtual area in an upper, lower, left, and right direction of the target image; and the instruction generation subunit is used for generating a translation instruction corresponding to the virtual area when the tail end of the target instrument in the target image moves into the virtual area.
In some embodiments, the apparatus further comprises: and the first adjusting unit is used for automatically adjusting the position of the tail end of the target instrument when the pose of the image acquisition assembly is adjusted according to the adjusting direction of the image acquisition assembly, so that the relative pose of the tail end of the target instrument and the image acquisition assembly is kept consistent.
In some embodiments, further comprising: and the second adjusting unit is used for adjusting the manipulator pose corresponding to the target instrument end recorded in the operation console in real time when the position of the target instrument end is automatically adjusted so that the relative pose of the target instrument end and the image acquisition assembly is consistent, so that the pose of the manipulator is consistent with the pose of the target instrument end.
The specific details of the pose adjusting device of the image capturing assembly can be understood with reference to the related descriptions and effects in the corresponding embodiments of fig. 1 to 15, and are not repeated here.
The present specification provides a robot, which may be a surgical robot as shown in fig. 1 or a robot applied to other fields. The robot includes a base, a first robotic arm, a second robotic arm, an operating tool, an image acquisition assembly, an imaging assembly, a display, and a controller.
The first mechanical arm and the second mechanical arm are arranged on the base. In some embodiments, there may be a plurality of first mechanical arms and a plurality of second mechanical arms.
The operating tool is arranged at the tail end of the first mechanical arm, and the first mechanical arm can drive the pose of the operating tool to change.
The image acquisition component is arranged at the tail end of the second mechanical arm and is used for acquiring an image of the operation condition of the operation tool. For example, the image acquisition component may be an endoscope, an ultrasound probe, or the like. The second mechanical arm can act to drive the pose of the image acquisition component to change.
The imaging component is used for forming an image according to the signals acquired by the image acquisition component.
The display is used for presenting an image formed by the imaging assembly.
The controller is used for acquiring a target image formed according to signals acquired by the image acquisition component under the current pose; determining a reference coordinate system on the target image; acquiring a visual field adjusting instruction, wherein the visual field adjusting instruction is used for controlling the visual field of the image acquisition component to be adjusted relative to a reference coordinate system; determining an adjustment direction of an image acquisition component corresponding to the visual field adjustment instruction; and controlling and adjusting the pose of the image acquisition component according to the adjustment direction.
Embodiments of the present invention also provide a controller, as shown in fig. 17, which may include a processor 1701 and a memory 1702, where the processor 1701 and the memory 1702 may be connected by a bus or otherwise, as exemplified by the bus connection in fig. 17.
The processor 1701 may be a central processing unit (Central Processing Unit, CPU). The processor 1701 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory 1702 is a non-transitory computer readable storage medium that may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as the program instructions/modules corresponding to the pose adjustment method of the image acquisition assembly in the embodiments of the present invention (e.g., the first acquisition unit 10, the first determination unit 20, the second acquisition unit 30, the second determination unit 40, and the control unit 50 in fig. 16). The processor 1701 executes the various functional applications and data processing of the controller by running the non-transitory software programs, instructions, and modules stored in the memory 1702, thereby implementing the pose adjustment method of the image acquisition component in the method embodiments described above.
Memory 1702 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created by the processor 1701, and the like. In addition, memory 1702 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 1702 may optionally include memory located remotely from the processor 1701, such remote memory being connectable to the processor 1701 through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 1702 and, when executed by the processor 1701, perform the pose adjustment method of the image acquisition assembly in the embodiments shown in fig. 1 to 14.
The details of the controller may be understood by referring to the related descriptions and effects in the corresponding embodiments of fig. 1 to 15, which are not repeated here.
The present description also provides a computer storage medium having stored thereon computer program instructions which when executed perform the steps of the corresponding embodiments of fig. 1 to 15.
It will be appreciated by those skilled in the art that all or part of the methods in the above embodiments may be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a Flash Memory (Flash Memory), a Hard Disk (HDD), or a Solid State Drive (SSD); the storage medium may also comprise a combination of the above kinds of memory.
In this specification, the embodiments are described in a progressive manner; for identical or similar parts, reference may be made between the embodiments, and each embodiment focuses on its differences from the others. In particular, the hardware-plus-program embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Those skilled in the art will also appreciate that, besides implementing the controller purely as computer readable program code, the method steps can be logically programmed so that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like, to achieve the same functions. Such a controller can therefore be regarded as a hardware component, and the means included within it for implementing the various functions can also be regarded as structures within the hardware component. The means for implementing the various functions can even be regarded both as software modules implementing the method and as structures within the hardware component.
The foregoing is merely an example of an embodiment of the present disclosure and is not intended to limit the embodiment of the present disclosure. Various modifications and variations of the illustrative embodiments will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like, which is within the spirit and principles of the embodiments of the present specification, should be included in the scope of the claims of the embodiments of the present specification.

Claims (16)

1. A method for adjusting the pose of an image acquisition assembly, comprising:
acquiring a target image formed according to signals acquired by an image acquisition component under the current pose;
determining a reference coordinate system on the target image;
acquiring a visual field adjustment instruction, wherein the visual field adjustment instruction is used for controlling the visual field of the image acquisition component to be adjusted relative to the reference coordinate system;
determining an adjustment direction of an image acquisition component corresponding to the visual field adjustment instruction;
and controlling and adjusting the pose of the image acquisition component according to the adjustment direction.
2. The method of claim 1, wherein determining a reference coordinate system on the target image comprises:
acquiring a target position point on the target image;
and establishing a reference coordinate system according to the target position point.
3. The method of claim 2, wherein acquiring the target location point on the target image comprises:
acquiring the position of a laser point in the target image, and taking the position of the laser point as a target position point; wherein the position of the laser spot is obtained by:
acquiring a first image formed according to a signal acquired by an image acquisition component under the pose of the last moment;
and controlling the laser at the tail end of the target instrument to emit laser to the target position at the current moment by referring to the first image, and controlling the image acquisition assembly to acquire signals.
4. The method of claim 2, wherein acquiring the target location point on the target image comprises:
acquiring a target position positioning instruction;
responding to the target position positioning instruction, acquiring the position of the tail end of the target instrument in the target image, and taking the position of the tail end of the target instrument as a target position point; wherein the position of the target instrument tip is obtained by:
acquiring a first image formed according to a signal acquired by an image acquisition component under the pose of the last moment;
and controlling the tail end of the target instrument to move to the target position at the current moment by referring to the first image, and controlling the image acquisition assembly to acquire signals.
5. The method of claim 2, wherein acquiring the target location point on the target image comprises:
when the target image is displayed on the display screen, acquiring the position determined by the operator on the displayed target image;
the position determined on the displayed target image is determined as a target position point on the target image.
6. The method according to claim 3 or 4, wherein establishing a reference coordinate system from the target location point comprises:
determining a first position coordinate of a target position point under a target object coordinate system; the target object coordinate system is a coordinate system established by taking a preset point on the target object as a coordinate origin;
determining a second position coordinate of a predetermined point on the target object under the base coordinate system;
determining the position of the target position point under a base coordinate system according to the first position coordinate and the second position coordinate;
and establishing a reference coordinate system according to the position of the target position point under the base coordinate system.
7. The method of claim 6, wherein determining a first position coordinate of the target position point in the target object coordinate system comprises:
acquiring pixel coordinates of a target position point in the target image; wherein the pixel coordinates are coordinates determined with one pixel as one length unit;
converting the pixel coordinates into first position coordinates of the target position point under an image acquisition component coordinate system according to a preset conversion relation; the preset conversion relation is used for converting the pixel length in the target image into the actual length under the base coordinate system.
8. The method of claim 7, further comprising, prior to converting the pixel coordinates into first position coordinates of the target position point in the image capturing assembly coordinate system according to a preset conversion relationship: the preset conversion relation is obtained according to the following mode:
keeping the pose of the image acquisition assembly unchanged, controlling the tail end of the instrument to sequentially move to a plurality of position points in the visual field of the image acquisition assembly, and recording the base coordinates of each position point under the base coordinate system;
determining pixel coordinates of each of a plurality of location points in the target image;
and determining the conversion relation between the pixel length in the target image and the actual length under the base coordinate system according to the pixel coordinates of each position point in the target image and the base coordinates under the base coordinate system.
9. The method of claim 2, wherein establishing a reference coordinate system from the target location point comprises:
establishing a reference coordinate system by taking a target position point as an origin of the reference coordinate system and taking the coordinate axis direction of the image acquisition component coordinate system as the coordinate axis direction of the reference coordinate system;
or,
establishing a reference coordinate system by taking a first target position point as an origin of the reference coordinate system, taking the direction of a second target position point relative to the first target position point as the x-axis direction of the reference coordinate system, and taking the z-axis direction of the image acquisition component coordinate system as the z-axis direction of the reference coordinate system;
or,
and establishing a reference coordinate system by taking the first target position point as an origin of the reference coordinate system, taking the direction of the second target position point relative to the first target position point as the x-axis direction of the reference coordinate system, and taking the normal direction of the plane in which the first target position point, the second target position point and the third target position point are located as the z-axis direction of the reference coordinate system.
10. The method of claim 1, wherein obtaining a view adjustment instruction comprises:
acquiring a visual field translation or rotation instruction sent by an operator through a manipulator on an operation console;
or,
displaying the target image, and displaying at least one virtual area in the upper, lower, left and right directions of the target image;
when the tail end of the target instrument in the target image moves into the virtual area, a translation instruction corresponding to the position of the virtual area is generated.
11. The method as recited in claim 1, further comprising:
when the pose of the image acquisition component is adjusted according to the adjustment direction of the image acquisition component, the position of the tail end of the target instrument is automatically adjusted, so that the relative pose of the tail end of the target instrument and the image acquisition component is kept consistent.
12. The method of claim 11, wherein when automatically adjusting the position of the target instrument tip such that the target instrument tip is consistent with the relative pose of the image acquisition assembly, further comprising:
and adjusting the position and the posture of the manipulator corresponding to the tail end of the target instrument recorded in the operation console in real time so as to keep the position and the posture of the manipulator consistent with the position and the posture of the tail end of the target instrument.
13. A pose adjustment device of an image acquisition assembly, comprising:
the first acquisition unit is used for acquiring a target image formed according to signals acquired by the image acquisition component under the current pose;
a first determining unit configured to determine a reference coordinate system on the target image;
a second acquisition unit configured to acquire a field-of-view adjustment instruction for controlling adjustment of a field of view of the image acquisition component with respect to the reference coordinate system;
a second determining unit, configured to determine an adjustment direction of the image acquisition component corresponding to the field adjustment instruction;
and the control unit is used for controlling and adjusting the pose of the image acquisition component according to the adjustment direction.
14. A robot, comprising:
a base;
the first mechanical arm and the second mechanical arm are arranged on the base;
the operation tool is arranged at the tail end of the first mechanical arm, and the first mechanical arm can act to drive the pose of the operation tool to change;
the image acquisition assembly is arranged at the tail end of the second mechanical arm and is used for acquiring an image of the operation condition of the operation tool, and the second mechanical arm can drive the pose of the image acquisition assembly to change;
An imaging assembly for forming an image from the signals acquired by the image acquisition assembly;
a display for presenting an image formed by the imaging assembly;
the controller is used for acquiring a target image formed according to the signals acquired by the image acquisition component under the current pose; determining a reference coordinate system on the target image; acquiring a visual field adjustment instruction, wherein the visual field adjustment instruction is used for controlling the visual field of the image acquisition component to be adjusted relative to the reference coordinate system; determining an adjustment direction of an image acquisition component corresponding to the visual field adjustment instruction; and controlling and adjusting the pose of the image acquisition component according to the adjustment direction.
15. A controller, comprising:
a memory and a processor in communication with each other, the memory having stored therein computer instructions which, upon execution, cause the processor to perform the steps of the method of any of claims 1 to 12.
16. A computer storage medium storing computer program instructions which, when executed, implement the steps of the method of any one of claims 1 to 12.
CN202210766701.0A 2022-07-01 2022-07-01 Pose adjusting method and device of image acquisition assembly and controller Pending CN117372667A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210766701.0A CN117372667A (en) 2022-07-01 2022-07-01 Pose adjusting method and device of image acquisition assembly and controller

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210766701.0A CN117372667A (en) 2022-07-01 2022-07-01 Pose adjusting method and device of image acquisition assembly and controller

Publications (1)

Publication Number Publication Date
CN117372667A true CN117372667A (en) 2024-01-09

Family

ID=89402755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210766701.0A Pending CN117372667A (en) 2022-07-01 2022-07-01 Pose adjusting method and device of image acquisition assembly and controller

Country Status (1)

Country Link
CN (1) CN117372667A (en)

Similar Documents

Publication Publication Date Title
US10282881B2 (en) Rendering tool information as graphic overlays on displayed images of tools
CN108472095B (en) System, controller and method for robotic surgery using virtual reality devices
CN107049492B (en) Surgical robot system and method for displaying position of surgical instrument
KR101320379B1 (en) Auxiliary image display and manipulation on a computer display in a medical robotic system
JP5372225B2 (en) Tool position and identification indicator displayed in the border area of the computer display screen
US8734431B2 (en) Remote control system
JP3540362B2 (en) Surgical manipulator control system and control method
JP4152402B2 (en) Surgery support device
US11801103B2 (en) Surgical system and method of controlling surgical system
KR20140112207A (en) Augmented reality imaging display system and surgical robot system comprising the same
KR20140115575A (en) Surgical robot system and method for controlling the same
KR20190099280A (en) Surgical support device, control method thereof, recording medium and surgical support system
JP2021531910A (en) Robot-operated surgical instrument location tracking system and method
CN110464468B (en) Surgical robot and control method and control device for tail end instrument of surgical robot
JP7334499B2 (en) Surgery support system, control device and control method
CN115500950A (en) Endoscope pose adjusting method, surgical robot, and storage medium
US20220322919A1 (en) Medical support arm and medical system
JP7494196B2 (en) SYSTEM AND METHOD FOR FACILITATING OPTIMIZATION OF IMAGING DEVICE VIEWPOINT DURING A SURGERY SESSION OF A COMPUTER-ASSISTED SURGERY SYSTEM - Patent application
US20220211270A1 (en) Systems and methods for generating workspace volumes and identifying reachable workspaces of surgical instruments
CN114631886A (en) Mechanical arm positioning method, readable storage medium and surgical robot system
WO2023040817A1 (en) Control method of surgeon console, surgeon console, robot system, and medium
CN117372667A (en) Pose adjusting method and device of image acquisition assembly and controller
RU2721461C1 (en) Method of controlling a camera in a robot-surgical system
JP2002017751A (en) Surgery navigation device
US20230139425A1 (en) Systems and methods for optimizing configurations of a computer-assisted surgical system for reachability of target objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination