CN117860382A - Navigation surgery mechanical arm vision servo pose prediction PD control method based on LSTM - Google Patents


Info

Publication number
CN117860382A
CN117860382A · CN117860382B · Application CN202410002423.0A
Authority
CN
China
Prior art keywords
mechanical arm
pose
lstm
prediction
target
Prior art date
Legal status
Granted
Application number
CN202410002423.0A
Other languages
Chinese (zh)
Other versions
CN117860382B (en)
Inventor
张逸凌
刘星宇
Current Assignee
Longwood Valley Medtech Co Ltd
Original Assignee
Longwood Valley Medtech Co Ltd
Priority date
Filing date
Publication date
Application filed by Longwood Valley Medtech Co Ltd
Priority: CN202410002423.0A
Publication of CN117860382A
Application granted
Publication of CN117860382B
Legal status: Active


Classifications

    • G06N 3/0442 — Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • A61B 34/20 — Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/30 — Surgical robots
    • A61B 34/70 — Manipulators specially adapted for use in surgery
    • G05B 11/42 — Automatic controllers, electric, for obtaining a characteristic which is both proportional and time-dependent, e.g. P.I., P.I.D.
    • A61B 2034/2046 — Tracking techniques
    • A61B 2034/2065 — Tracking using image or pattern recognition
    • A61B 2034/2074 — Interface software
    • Y02P 90/02 — Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]


Abstract

The application provides an LSTM-based visual servo pose prediction PD control method, device, and equipment for a navigation surgery mechanical arm, and a computer readable storage medium. The method comprises the following steps: establishing a navigation model; collecting rotation and translation data of the focus target area offline; inputting the focus target area data into an LSTM for model training to obtain a prediction model; inputting online focus target area data into the prediction model to generate a prediction result for the next moment; inputting the prediction result for the next moment into the navigation model to generate the target pose of the mechanical arm at the next moment; inputting the current target pose of the mechanical arm and the next-moment target pose of the mechanical arm together into the PD controller, which outputs the control quantity for the pose of the mechanical arm's tail end; and controlling the mechanical arm to follow the movement of the target area based on this pose control quantity. According to the embodiments of the application, the response speed of the mechanical arm's dynamic following can be improved, the following time delay and error reduced, and the accuracy and stability of the puncture task effectively improved.

Description

Navigation surgery mechanical arm vision servo pose prediction PD control method based on LSTM
Technical Field
The application belongs to the technical field of mechanical arm control, and particularly relates to a navigation surgery mechanical arm visual servo pose prediction PD control method, device and equipment based on LSTM and a computer readable storage medium.
Background
In a navigation surgical robot performing a positioning puncture task, visual markers are arranged at the focus and at the tail end of the mechanical arm, and navigation in image space is completed by a vision sensor. Before the puncture task starts, positioning of the mechanical arm tool over the target area is completed using the visual servo relationship; after the task starts, the mechanical arm continuously relies on the visual servo relationship and a control algorithm to make the tool dynamically follow the target area. The accuracy of the puncture task is greatly affected by the quality of this dynamic following.
Therefore, how to improve the response speed of the mechanical arm's dynamic following, reduce the following time delay and error, and effectively improve the accuracy and stability of the puncture task is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The application embodiment provides a navigation surgery mechanical arm vision servo pose prediction PD control method, device and equipment based on LSTM and a computer readable storage medium, which can improve the response speed of dynamic following of the mechanical arm, reduce the following time delay and error and effectively improve the accuracy and stability of a puncture task.
In a first aspect, an embodiment of the present application provides an LSTM-based visual servo pose prediction PD control method for a navigation surgery mechanical arm, including:
establishing a navigation model;
collecting rotation and translation data of the focus target area offline;
inputting the focus target area data into an LSTM for model training to obtain a prediction model;
inputting online focus target area data into the prediction model to generate a prediction result for the next moment;
inputting the prediction result for the next moment into the navigation model to generate the target pose of the mechanical arm at the next moment;
inputting the current target pose of the mechanical arm and the next-moment target pose of the mechanical arm together into the PD controller, and outputting the control quantity for the pose of the mechanical arm's tail end;
and controlling the mechanical arm to follow the movement of the target area based on the control quantity for the pose of the mechanical arm's tail end.
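The steps of the first aspect can be sketched as one cycle of a predict–navigate–control loop. This is a minimal illustrative sketch only; the three callables and the toy stand-ins below are hypothetical placeholders, not the patent's implementation:

```python
import numpy as np

def control_cycle(predict_next_pose, navigation_model, pd_controller,
                  pose_history, current_flange_pose):
    """One cycle of the claimed method: LSTM prediction -> navigation model
    -> PD controller. All callables are hypothetical placeholders."""
    next_target_pose = predict_next_pose(pose_history)        # prediction result for next moment
    next_flange_pose = navigation_model(next_target_pose)     # next-moment target pose of the arm
    u = pd_controller(current_flange_pose, next_flange_pose)  # tail-end pose control quantity
    return u

# Toy stand-ins just to exercise the loop structure: linear extrapolation
# for the predictor, identity navigation, and a pure-P controller.
predict = lambda hist: 2 * hist[-1] - hist[-2]
navigate = lambda pose: pose
pd = lambda cur, nxt: 0.5 * (nxt - cur)

history = [np.zeros(6), np.full(6, 0.1)]   # two past 6-DOF poses
u = control_cycle(predict, navigate, pd, history, np.zeros(6))
```

The sketch only fixes the data flow between the three components; each would be replaced by the trained LSTM, the navigation model of S101, and the PD law of S106.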
Optionally, establishing the navigation model includes:
acquiring a relative conversion matrix of the target end flange compared with the current flange;
and establishing an error matrix for dynamically tracking the target area by the mechanical arm.
Optionally, obtaining the relative conversion matrix of the target end flange compared with the current flange includes:
acquiring a pose conversion matrix of the target area under the vision sensor;
establishing a pose conversion matrix of the target area under a mechanical arm tool;
based on the pose conversion matrix of the target area under the mechanical arm tool, the relative conversion matrix of the target end flange compared with the current flange is obtained.
Optionally, inputting the focus target area data into the LSTM for model training to obtain a prediction model includes:
the LSTM network comprising an input layer, an intermediate hidden layer, and an output layer;
wherein the input layer nodes respectively represent the time series of the pose vector elements;
the intermediate hidden layer has two layers, with full connection among the nodes of each layer;
and the output layer nodes respectively represent the pose at the next moment.
Optionally, calculating an output vector of the output layer node includes:
respectively acquiring an input state, an output state and a weight matrix of the hidden layer;
based on the input state, the output state and the weight matrix of the hidden layer, the output vector of the output layer node is calculated.
Optionally, acquiring the current target pose of the mechanical arm includes:
calculating the expected position of the mechanical arm carrying the tool to the target focus target area;
acquiring the current position increment;
based on the desired position and the current position increment, a current target pose of the mechanical arm is calculated.
Optionally, acquiring the target pose of the mechanical arm at the next moment includes:
inputting the pose of the focus target area at the next moment into a navigation model to obtain the flange pose of the mechanical arm;
and calculating the target pose of the mechanical arm at the next moment based on the flange pose of the mechanical arm and the current position increment.
In a second aspect, an embodiment of the present application provides a LSTM-based navigation surgery mechanical arm vision servo pose prediction PD control apparatus, including:
the model building module is used for building a navigation model;
the data acquisition module is used for acquiring focus target zone rotation and translation data offline;
the model training module is used for inputting focus target area data into the LSTM to perform model training so as to obtain a prediction model;
the prediction result generation module is used for inputting the online focus target area data into the prediction model to generate a prediction result at the next moment;
the target pose generation module is used for inputting the predicted result of the next moment into the navigation model to generate the target pose of the mechanical arm at the next moment;
the pose control quantity output module is used for inputting the current target pose of the mechanical arm and the next target pose of the mechanical arm into the PD controller together and outputting the pose control quantity of the tail end of the mechanical arm;
and the mechanical arm control module is used for controlling the mechanical arm to move along with the target area based on the control quantity of the tail end pose of the mechanical arm.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory storing computer program instructions;
and the processor executes the computer program instructions to realize the visual servo pose prediction PD control method of the navigation surgery mechanical arm based on LSTM.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon computer program instructions that, when executed by a processor, implement a LSTM-based navigation surgery mechanical arm vision servo pose prediction PD control method.
According to the PD control method, device and equipment for visual servo pose prediction of the navigation surgery mechanical arm based on the LSTM and the computer readable storage medium, the response speed of dynamic following of the mechanical arm can be improved, the following time delay and errors are reduced, and the accuracy and stability of a puncture task are effectively improved.
The LSTM-based navigation surgery mechanical arm visual servo pose prediction PD control method comprises the following steps: establishing a navigation model; collecting rotation and translation data of the focus target area offline; inputting the focus target area data into an LSTM for model training to obtain a prediction model; inputting online focus target area data into the prediction model to generate a prediction result for the next moment; inputting the prediction result for the next moment into the navigation model to generate the target pose of the mechanical arm at the next moment; inputting the current target pose of the mechanical arm and the next-moment target pose of the mechanical arm together into the PD controller, and outputting the control quantity for the pose of the mechanical arm's tail end; and controlling the mechanical arm to follow the movement of the target area based on this control quantity.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application or of the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flowchart of an LSTM-based navigation surgery mechanical arm visual servo pose prediction PD control method according to one embodiment of the present application;
fig. 2 is a schematic diagram of an LSTM network structure according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an LSTM-based visual servo pose prediction PD control device for a navigational surgical mechanical arm according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application are described in detail below to make the objects, technical solutions and advantages of the present application more apparent, and to further describe the present application in conjunction with the accompanying drawings and the detailed embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative of the application and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by showing examples of the present application.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In order to solve the problems in the prior art, the embodiments of the application provide an LSTM-based visual servo pose prediction PD control method, device, and equipment for a navigation surgery mechanical arm, and a computer readable storage medium. The LSTM-based navigation surgery mechanical arm visual servo pose prediction PD control method provided by the embodiments of the application is described first.
Fig. 1 shows a flowchart of the LSTM-based navigation surgery mechanical arm visual servo pose prediction PD control method according to an embodiment of the present application. As shown in fig. 1, the method includes:
s101, establishing a navigation model;
in one embodiment, building a navigation model includes:
acquiring a relative conversion matrix of the target end flange compared with the current flange;
and establishing an error matrix for dynamically tracking the target area by the mechanical arm.
In one embodiment, obtaining a relative transformation matrix for the target end flange as compared to the current flange includes:
acquiring a pose conversion matrix of the target area under the vision sensor;
establishing a pose conversion matrix of the target area under a mechanical arm tool;
based on the pose conversion matrix of the target area under the mechanical arm tool, the relative conversion matrix of the target end flange compared with the current flange is obtained.
Specifically, the pose of the target area under the vision sensor is the conversion matrix T_cam^target, and the pose of the mechanical arm tool under the vision sensor is T_cam^tool (tool denotes the tool coordinate system). The current relationship of the mechanical arm tool to the target area is then T_tool^target = (T_cam^tool)^(-1) · T_cam^target; this conversion matrix can be obtained by reading the visual marker data with the vision sensor. The relative conversion matrix of the target end flange compared with the current flange is therefore T_flange^flange' = T_flange^tool · T_tool^target · (T_flange^tool)^(-1), where T_flange^tool describes the rigid connection between the end flange of the mechanical arm and the end tool and is a constant matrix. From this, the error matrix E for dynamic tracking of the target area by the mechanical arm is established and converted into the vector form e = [Δx, Δy, Δz, Δrx, Δry, Δrz]^T.
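Assuming the poses above are represented as 4×4 homogeneous transforms (an illustrative sketch under that assumption, not the patent's code; function names are placeholders), the tool-to-target relation can be composed as:

```python
import numpy as np

def make_transform(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation R
    and a 3-vector translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def tool_to_target(T_cam_tool, T_cam_target):
    """Pose of the target area expressed in the tool frame:
    inv(T_cam_tool) composed with T_cam_target."""
    return np.linalg.inv(T_cam_tool) @ T_cam_target

# Example: tool and target both measured by the vision sensor,
# same orientation, target 0.2 m further along the sensor's z axis.
T_cam_tool = make_transform(np.eye(3), np.array([0.1, 0.0, 0.5]))
T_cam_target = make_transform(np.eye(3), np.array([0.1, 0.0, 0.7]))
T_err = tool_to_target(T_cam_tool, T_cam_target)
# When the tool reaches the target, this transform becomes the identity.
```

The tracking error matrix is exactly this tool-frame transform: it equals the identity when the tool coincides with the target area, which is the condition the controller drives toward.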
S102, acquiring focus target region rotation and translation data offline;
s103, inputting focus target area data into an LSTM (least squares) for model training to obtain a prediction model;
in one embodiment, model training is performed by inputting focus target area data into an LSTM to obtain a prediction model, and the method comprises the following steps:
the network of LSTM includes: an input layer, an intermediate hidden layer and an output layer;
wherein, the input layer nodes respectively represent the time sequence of the pose vector elements;
the middle hidden layer has two layers, and all the nodes of each layer are connected;
the output layer nodes respectively represent the pose of the next moment.
In one embodiment, calculating an output vector of an output layer node includes:
respectively acquiring an input state, an output state and a weight matrix of the hidden layer;
based on the input state, the output state and the weight matrix of the hidden layer, the output vector of the output layer node is calculated.
Specifically, as shown in fig. 2, the LSTM performs regression estimation of the target motion.
The regression estimation of the motion pose of the focus uses an LSTM (long short-term memory) network. The LSTM network is trained with the visual-marker motion data collected offline, and the online data are then fed in to generate predicted pose data. The network structure is as follows: the input layer nodes respectively represent the time series of the pose vector elements over the last m sampling instants, where m is the sliding width of the time-series data window; the six output layer nodes respectively represent the pose at the next moment, [x, y, z, rx, ry, rz]. The intermediate hidden layer consists of two layers of LSTM cells, each layer containing 12 nodes, with full connection among the nodes of adjacent layers and ReLU as the activation function.
The output vector y_t of the output layer is computed iteratively as y_t = W·h_t + b_y, where h is the output state of the hidden layer, b_y is the output bias, and W is the weight matrix. The loss function is defined as the error between the network's current output, i.e. its estimate of the pose at the next moment, and the true pose at that moment.
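As a rough numerical sketch of the gated recurrence inside one LSTM cell (these are the standard LSTM equations with random placeholder weights, not the patent's trained network), one forward pass over a pose window can be written as:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, W, U, b):
    """One time step of a standard LSTM cell.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) biases,
    stacked in the gate order [input, forget, cell candidate, output]."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0*H:1*H])        # input gate
    f = sigmoid(z[1*H:2*H])        # forget gate
    g = np.tanh(z[2*H:3*H])        # candidate cell state
    o = sigmoid(z[3*H:4*H])        # output gate
    c = f * c_prev + i * g         # new cell state
    h = o * np.tanh(c)             # new hidden (output) state
    return h, c

# Run a pose time series through the cell: 6 pose elements per step,
# 12 hidden nodes and window width m = 5, matching the sizes described above.
rng = np.random.default_rng(0)
D, H, m = 6, 12, 5
W, U, b = rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for t in range(m):
    h, c = lstm_cell_step(rng.normal(size=D), h, c, W, U, b)
```

The final hidden state h is what a linear output layer (y = W·h + b_y, as above) would map to the six predicted pose elements for the next moment.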
S104, inputting online focus target area data into a prediction model to generate a prediction result at the next moment;
s105, inputting a predicted result of the next moment into a navigation model to generate a target pose of the mechanical arm at the next moment;
s106, inputting the current target pose of the mechanical arm and the next target pose of the mechanical arm into the PD controller together, and outputting the control quantity of the pose of the tail end of the mechanical arm;
in one embodiment, obtaining the current target pose of the robotic arm includes:
calculating the expected position of the mechanical arm carrying the tool to the target focus target area;
acquiring the current position increment;
based on the desired position and the current position increment, a current target pose of the mechanical arm is calculated.
In one embodiment, acquiring the next moment target pose of the robotic arm comprises:
inputting the pose of the focus target area at the next moment into a navigation model to obtain the flange pose of the mechanical arm;
and calculating the target pose of the mechanical arm at the next moment based on the flange pose of the mechanical arm and the current position increment.
Specifically, the LSTM-based PD mechanical-arm tracking controller takes the form u(k) = K_p · e(k) + K_d · [ê(k+1) − e(k)], where u is the control output of the mechanical arm. The current target pose is formed from x_d, the desired position at which the mechanical arm carries the tool toward the target focus target area computed according to the navigation principle, plus Δx, the current position increment, and e(k) is its error relative to the current end pose. ê(k+1) is the corresponding error for the next-moment target pose, obtained by substituting the focus target pose that the LSTM network predicts for the next moment, by regression over the previous m moments, into the navigation calculation model to obtain the mechanical-arm flange pose. K_p and K_d are the controller parameters.
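A minimal numerical sketch of such a predictive PD law follows. The gains, the drift model, and the way the predicted next-moment error is formed are illustrative assumptions, not the patent's tuned controller:

```python
import numpy as np

def pd_control(e_now, e_next_pred, Kp=0.8, Kd=0.3):
    """PD control quantity from the current tracking error and the
    predicted next-moment error (derivative term as a forward difference)."""
    return Kp * e_now + Kd * (e_next_pred - e_now)

# Track a 6-DOF target pose drifting at a constant rate; the prediction
# term lets the controller anticipate the motion instead of lagging it.
pose = np.zeros(6)
target = np.full(6, 1.0)
drift = np.full(6, 0.01)           # per-cycle target motion
for _ in range(50):
    e_now = target - pose
    e_next = (target + drift) - pose   # error against the predicted next target
    pose = pose + pd_control(e_now, e_next)
    target = target + drift

err = np.linalg.norm(target - pose)
```

With the derivative term fed by the predicted error, the steady-state lag behind the drifting target stays small; with e_next = e_now (no prediction) the same loop settles to a larger lag, which is the delay the LSTM prediction is meant to remove.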
And S107, controlling the mechanical arm to move along with the target area based on the pose control quantity of the tail end of the mechanical arm.
Fig. 3 shows a schematic structural diagram of an LSTM-based navigation surgery mechanical arm visual servo pose prediction PD control device according to an embodiment of the present application. As shown in fig. 3, the LSTM-based navigation surgery mechanical arm vision servo pose prediction PD control apparatus includes:
the model building module 301 is configured to build a navigation model;
the data acquisition module 302 is configured to acquire focus target area rotation and translation data offline;
the model training module 303 is configured to input focus target area data into the LSTM to perform model training, so as to obtain a prediction model;
the prediction result generation module 304 is configured to input the online focus target area data into a prediction model, and generate a prediction result at the next moment;
the target pose generation module 305 is configured to input a predicted result at the next time into the navigation model, and generate a target pose of the mechanical arm at the next time;
the pose control amount output module 306 is configured to input the current target pose of the mechanical arm and the target pose of the mechanical arm at the next moment into the PD controller together, and output the pose control amount of the tail end of the mechanical arm;
the mechanical arm control module 307 is configured to control the mechanical arm to move along with the target area based on the control amount of the pose of the end of the mechanical arm.
Fig. 4 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
The electronic device may comprise a processor 401 and a memory 402 in which computer program instructions are stored.
In particular, the processor 401 described above may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured to implement one or more integrated circuits of embodiments of the present application.
Memory 402 may include mass storage for data or instructions. By way of example, and not limitation, memory 402 may comprise a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 402 may include removable or non-removable (or fixed) media, where appropriate. The memory 402 may be internal or external to the electronic device, where appropriate. In particular embodiments, memory 402 may be non-volatile solid-state memory.
In one embodiment, memory 402 may be Read Only Memory (ROM). In one embodiment, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically Erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 401 reads and executes the computer program instructions stored in the memory 402 to implement any one of the LSTM-based navigation surgery mechanical arm visual servo pose prediction PD control methods of the above embodiments.
In one example, the electronic device may also include a communication interface 403 and a bus 410. As shown in fig. 4, the processor 401, the memory 402, and the communication interface 403 are connected by a bus 410 and perform communication with each other.
The communication interface 403 is mainly used to implement communication between each module, device, unit and/or apparatus in the embodiments of the present application.
Bus 410 includes hardware, software, or both, coupling the components of the electronic device to one another. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a low pin count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Extended (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), or another suitable bus, or a combination of two or more of these. Bus 410 may include one or more buses, where appropriate. Although embodiments of the present application describe and illustrate a particular bus, the present application contemplates any suitable bus or interconnect.
In addition, in combination with the LSTM-based navigation surgery mechanical arm visual servo pose prediction PD control method in the above embodiments, embodiments of the present application may provide a computer readable storage medium to implement. The computer readable storage medium has stored thereon computer program instructions; the computer program instructions, when executed by the processor, implement any of the LSTM-based navigation surgery mechanical arm visual servo pose prediction PD control methods of the above embodiments.
It should be clear that the present application is not limited to the particular arrangements and processes described above and illustrated in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the steps described above; that is, the steps may be performed in the order mentioned in the embodiments, in an order different from that in the embodiments, or with several steps performed simultaneously.
Aspects of the present application are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing describes only specific embodiments of the present application. Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto; any equivalent modification or substitution that a person skilled in the art can readily conceive within the technical scope disclosed herein shall fall within the scope of the present application.

Claims (10)

1. An LSTM-based navigation surgery mechanical arm vision servo pose prediction PD control method, characterized by comprising the following steps:
establishing a navigation model;
collecting rotation and translation data of the focus target area offline;
inputting focus target area data into LSTM for model training to obtain a prediction model;
inputting online focus target area data into the prediction model to generate a prediction result for the next moment;
inputting the prediction result for the next moment into the navigation model to generate the target pose of the mechanical arm at the next moment;
inputting the current target pose of the mechanical arm and the target pose of the mechanical arm at the next moment together into the PD controller, and outputting the control quantity of the mechanical arm end pose;
and controlling the mechanical arm to follow the target area based on the control quantity of the mechanical arm end pose.
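As a minimal illustration of the PD step in the method above, the following Python sketch computes the end-pose control quantity from the current and next-moment target poses; the gains `Kp` and `Kd`, the control period `dt`, and the 6-DOF pose vectors are assumptions for the example, not values fixed by the claims.

```python
import numpy as np

# Assumed gains and control period for illustration only.
Kp, Kd = 0.8, 0.1   # proportional and derivative gains
dt = 0.02           # control period in seconds

def pd_control(current_pose, target_pose, prev_error):
    """One PD step on a 6-DOF pose error (x, y, z, roll, pitch, yaw)."""
    error = target_pose - current_pose
    derivative = (error - prev_error) / dt
    control = Kp * error + Kd * derivative  # end-pose control quantity
    return control, error

# Usage with dummy poses: current pose and next-moment target pose.
current = np.zeros(6)
target = np.array([0.01, 0.0, 0.005, 0.0, 0.0, 0.002])
u, e = pd_control(current, target, np.zeros(6))
```

In a real servo loop this step would run once per control period, feeding `u` to the arm's end-pose controller and carrying `e` forward as the previous error.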
2. The LSTM-based navigation surgery mechanical arm vision servo pose prediction PD control method according to claim 1, wherein establishing a navigation model comprises:
acquiring a relative conversion matrix of the target end flange compared with the current flange;
and establishing an error matrix for dynamically tracking the target area by the mechanical arm.
3. The LSTM-based navigation surgery mechanical arm vision servo pose prediction PD control method according to claim 2, wherein obtaining a relative transformation matrix of a target end flange compared to a current flange comprises:
acquiring a pose conversion matrix of the target area under the vision sensor;
establishing a pose conversion matrix of the target area under a mechanical arm tool;
based on the pose conversion matrix of the target area under the mechanical arm tool, the relative conversion matrix of the target end flange compared with the current flange is obtained.
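The transform chain in this claim can be illustrated by composing 4x4 homogeneous matrices; the frame names (`T_cam_target`, `T_tool_cam`) and all numeric poses below are assumptions for the sketch, not calibration values from the patent.

```python
import numpy as np

def inv_T(T):
    """Invert a rigid-body homogeneous transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Target pose under the vision sensor (assumed measurement).
T_cam_target = np.eye(4); T_cam_target[:3, 3] = [0.1, 0.0, 0.3]
# Hand-eye calibration: camera pose under the mechanical arm tool (assumed known).
T_tool_cam = np.eye(4); T_tool_cam[:3, 3] = [0.0, 0.05, 0.0]

# Target pose under the mechanical arm tool.
T_tool_target = T_tool_cam @ T_cam_target

# Current and goal flange poses in the robot base frame (assumed).
T_base_flange_cur = np.eye(4); T_base_flange_cur[:3, 3] = [0.4, 0.0, 0.5]
T_base_flange_goal = np.eye(4); T_base_flange_goal[:3, 3] = [0.5, 0.0, 0.5]

# Relative transform of the target end flange compared with the current flange.
T_rel = inv_T(T_base_flange_cur) @ T_base_flange_goal
```

Note the rigid-body inverse reuses the rotation transpose instead of a numeric inverse, which is both cheaper and numerically safer for pose matrices.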
4. The LSTM-based navigation surgery mechanical arm vision servo pose prediction PD control method according to claim 1, wherein inputting focus target area data into LSTM for model training to obtain a prediction model, comprising:
the LSTM network comprises an input layer, an intermediate hidden layer and an output layer;
wherein the input layer nodes respectively represent the time series of the pose vector elements;
the intermediate hidden layer comprises two layers, with all nodes of each layer fully connected;
and the output layer nodes respectively represent the pose at the next moment.
5. The LSTM based navigation surgery mechanical arm vision servo pose prediction PD control method of claim 4, wherein calculating an output vector of an output layer node comprises:
respectively acquiring an input state, an output state and a weight matrix of the hidden layer;
based on the input state, the output state and the weight matrix of the hidden layer, the output vector of the output layer node is calculated.
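A single-layer version of the LSTM forward pass described in claims 4 and 5 can be sketched as follows; the layer sizes, random weights, and use of one hidden layer (rather than the claimed two stacked layers) are simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; W, U, b stack the i, f, o, g gate parameters."""
    z = W @ x + U @ h + b
    n = h.size
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])
    c_new = f * c + i * g          # cell (input) state update
    h_new = o * np.tanh(c_new)     # hidden (output) state
    return h_new, c_new

in_dim, hid = 6, 16                # 6 pose-vector elements, 16 hidden units (assumed)
W1 = rng.standard_normal((4*hid, in_dim)) * 0.1
U1 = rng.standard_normal((4*hid, hid)) * 0.1
b1 = np.zeros(4*hid)
Wy = rng.standard_normal((in_dim, hid)) * 0.1  # output-layer weight matrix

# Feed a short pose time series, then read the next-moment pose off the output layer.
h, c = np.zeros(hid), np.zeros(hid)
for x in rng.standard_normal((10, in_dim)):
    h, c = lstm_step(x, h, c, W1, U1, b1)
y_next = Wy @ h                    # output vector: predicted pose at the next moment
```

The final line matches the claim's structure: the output-layer vector is computed from the hidden layer's states and a weight matrix; a second stacked hidden layer would simply feed each `h` into another `lstm_step` before the output projection.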
6. The LSTM based navigation surgery mechanical arm vision servo pose prediction PD control method according to claim 1, wherein obtaining the current target pose of the mechanical arm comprises:
calculating the expected position at which the mechanical arm carries the tool to the focus target area;
acquiring the current position increment;
based on the desired position and the current position increment, a current target pose of the mechanical arm is calculated.
7. The LSTM based navigation surgery mechanical arm vision servo pose prediction PD control method according to claim 6, wherein obtaining a next moment target pose of the mechanical arm comprises:
inputting the pose of the focus target area at the next moment into a navigation model to obtain the flange pose of the mechanical arm;
and calculating the target pose of the mechanical arm at the next moment based on the flange pose of the mechanical arm and the current position increment.
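Claims 6 and 7 combine a desired (or navigation-model flange) pose with the current position increment; treating poses as 6-element vectors, a toy sketch with assumed numeric values:

```python
import numpy as np

# All numbers below are illustrative assumptions, not values from the patent.
desired = np.array([0.50, 0.10, 0.30, 0.0, 0.0, 0.1])      # tool at focus target area
increment = np.array([0.002, -0.001, 0.0, 0.0, 0.0, 0.0])  # current position increment

# Claim 6: current target pose = desired position + current position increment.
pose_current_target = desired + increment

# Claim 7: run the predicted next-moment focus pose through the navigation model
# to get the flange pose (value assumed here), then apply the same increment.
flange_next = np.array([0.505, 0.10, 0.30, 0.0, 0.0, 0.1])
pose_next_target = flange_next + increment
```

Both target poses would then be fed to the PD controller of claim 1 to produce the end-pose control quantity.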
8. An LSTM-based navigation surgery mechanical arm vision servo pose prediction PD control apparatus, characterized in that the apparatus comprises:
the model building module is used for building a navigation model;
the data acquisition module is used for acquiring focus target zone rotation and translation data offline;
the model training module is used for inputting focus target area data into the LSTM to perform model training so as to obtain a prediction model;
the prediction result generation module is used for inputting the online focus target area data into the prediction model to generate a prediction result at the next moment;
the target pose generation module is used for inputting the predicted result of the next moment into the navigation model to generate the target pose of the mechanical arm at the next moment;
the pose control quantity output module is used for inputting the current target pose of the mechanical arm and the target pose of the mechanical arm at the next moment together into the PD controller, and outputting the control quantity of the mechanical arm end pose;
and the mechanical arm control module is used for controlling the mechanical arm to follow the target area based on the control quantity of the mechanical arm end pose.
9. An electronic device, the electronic device comprising: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the LSTM-based navigation surgery mechanical arm vision servo pose prediction PD control method according to any one of claims 1-7.
10. A computer readable storage medium, wherein computer program instructions are stored on the computer readable storage medium, and when the computer program instructions are executed by a processor, the computer program instructions implement the LSTM-based navigation surgery mechanical arm vision servo pose prediction PD control method according to any one of claims 1-7.
CN202410002423.0A 2024-01-02 2024-01-02 Navigation surgery mechanical arm vision servo pose prediction PD control method based on LSTM Active CN117860382B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410002423.0A CN117860382B (en) 2024-01-02 2024-01-02 Navigation surgery mechanical arm vision servo pose prediction PD control method based on LSTM

Publications (2)

Publication Number Publication Date
CN117860382A true CN117860382A (en) 2024-04-12
CN117860382B CN117860382B (en) 2024-06-25

Family

ID=90583987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410002423.0A Active CN117860382B (en) 2024-01-02 2024-01-02 Navigation surgery mechanical arm vision servo pose prediction PD control method based on LSTM

Country Status (1)

Country Link
CN (1) CN117860382B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115648211A (en) * 2022-10-31 2023-01-31 杭州键嘉医疗科技股份有限公司 Method, device and equipment for compensating attitude error of mechanical arm and storage medium
WO2023082990A1 (en) * 2021-11-09 2023-05-19 极限人工智能有限公司 Method and apparatus for determining working pose of robotic arm
WO2023142353A1 (en) * 2022-01-26 2023-08-03 奥比中光科技集团股份有限公司 Pose prediction method and apparatus
CN116747026A (en) * 2023-06-05 2023-09-15 北京长木谷医疗科技股份有限公司 Intelligent robot bone cutting method, device and equipment based on deep reinforcement learning
CN116889471A (en) * 2023-07-13 2023-10-17 北京长木谷医疗科技股份有限公司 Method, device and equipment for selecting and solving optimal joint angle of navigation operation mechanical arm
CN117084788A (en) * 2022-05-11 2023-11-21 北京天智航医疗科技股份有限公司 Method and device for determining target gesture of mechanical arm and storage medium

Also Published As

Publication number Publication date
CN117860382B (en) 2024-06-25

Similar Documents

Publication Publication Date Title
CN107703756B (en) Kinetic model parameter identification method and device, computer equipment and storage medium
CN101402199A (en) Hand-eye type robot movable target extracting method with low servo accuracy based on visual sensation
CN114131611B (en) Off-line compensation method, system and terminal for joint errors of robot gravity pose decomposition
CN111123947A (en) Robot traveling control method and device, electronic device, medium, and robot
CN109159112A (en) A kind of robot motion's method for parameter estimation based on Unscented kalman filtering
Shademan et al. Sensitivity analysis of EKF and iterated EKF pose estimation for position-based visual servoing
CN116747026B (en) Intelligent robot bone cutting method, device and equipment based on deep reinforcement learning
CN114763133A (en) Vehicle parking planning method, device, equipment and computer storage medium
CN116125906A (en) Motion planning method, device and equipment for numerical control machining and storage medium
CN113763434B (en) Target track prediction method based on Kalman filtering multi-motion model switching
CN117860382B (en) Navigation surgery mechanical arm vision servo pose prediction PD control method based on LSTM
CN109764876B (en) Multi-mode fusion positioning method of unmanned platform
CN116889471B (en) Method, device and equipment for selecting and solving optimal joint angle of navigation operation mechanical arm
CN1330466C (en) On-line robot hand and eye calibrating method based on motion selection
CN111113430B (en) Robot and tail end control method and device thereof
CN114851190B (en) Low-frequency drive and control integrated-oriented mechanical arm track planning method and system
CN115648211A (en) Method, device and equipment for compensating attitude error of mechanical arm and storage medium
CN113064154B (en) Aerial target tracking method
CN112643674B (en) Robot following machining workpiece surface compensation method, robot and storage device
CN115870976B (en) Sampling track planning method and device for mechanical arm and electronic equipment
CN116652972B (en) Series robot tail end track planning method based on bidirectional greedy search algorithm
CN118044883B (en) Intelligent navigation operation mechanical arm path planning method and device
CN114545936B (en) Robot path optimization method and system based on obstacle avoidance planning
CN116619393B (en) Mechanical arm admittance variation control method, device and equipment based on SVM
Zak et al. A prediction based strategy for robotic interception of moving targets

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant