CN117084788A - Method and device for determining target pose of mechanical arm, and storage medium - Google Patents

Method and device for determining the target pose of a mechanical arm, and storage medium

Info

Publication number
CN117084788A
CN117084788A
Authority
CN
China
Prior art keywords
mechanical arm
target
pose
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210513526.4A
Other languages
Chinese (zh)
Inventor
康永利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tinavi Medical Technologies Co Ltd
Original Assignee
Tinavi Medical Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tinavi Medical Technologies Co Ltd filed Critical Tinavi Medical Technologies Co Ltd
Priority to CN202210513526.4A priority Critical patent/CN117084788A/en
Publication of CN117084788A publication Critical patent/CN117084788A/en
Pending legal-status Critical Current

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 — Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 — Surgical robots
    • A61B 34/20 — Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2068 — Surgical navigation systems using pointers, e.g. pointers having reference marks for determining coordinates of body points

Abstract

The disclosure provides a method, a device, and a storage medium for determining a target pose of a mechanical arm. The method comprises the following steps: acquiring the position of a target object, determining initial pose matrices according to the position of the target object, and using the initial pose matrices to predict the target pose matrices of the mechanical arm after it has moved into position; performing an inverse-kinematics operation on the target pose matrices to obtain the multiple mechanical arm poses corresponding to each target pose matrix; determining the mutual positional relationships between the three-dimensional models in a robot-assisted orthopedic surgery platform, establishing a linear three-dimensional model between the navigation tracker and each tracer, and judging whether each mechanical arm pose is a non-occluding pose according to the positional relationships and the linear three-dimensional model; and screening the non-occluding mechanical arm poses with a preset target-pose judgment condition, taking a non-occluding pose that satisfies the condition as the target pose. The method guarantees the safety and operability of the operation and improves surgical efficiency.

Description

Method and device for determining the target pose of a mechanical arm, and storage medium
Technical Field
The disclosure relates to the technical field of robot-assisted orthopedic surgery platforms, and in particular to a method and device for determining a target pose of a mechanical arm, and a storage medium.
Background
A robot-assisted orthopedic navigation system assists the clinician in performing orthopedic surgery by using a mechanical arm for positioning and navigation, and such systems are now widely used in many types of orthopedic procedures. The system uses the mechanical arm to position and navigate surgical instruments: during surgery, the arm must move to a planned position and pose, after which the doctor performs the orthopedic operation with a tool mounted at the end of the arm.
Existing robot-assisted orthopedic navigation systems only address the problem of tracking interruptions between the navigation tracker and the tracers during surgery, avoiding line-of-sight obstruction by restricting the motion space of the mechanical arm with a virtual boundary. This approach, however, cannot determine the target pose of the arm after it has moved into position. Moreover, once the arm is in position it may obstruct the doctor, collide with surrounding objects, or create an obstruction between a tracer and the navigation tracker. Existing systems therefore cannot determine a mechanical arm target pose that both satisfies the surgical requirements and is the most reasonable, so the operation is easily interrupted and surgical safety and efficiency are reduced.
Disclosure of Invention
In view of the above, embodiments of the present disclosure provide a method, a device, and a storage medium for determining a target pose of a mechanical arm, so as to solve the problem in the prior art that a target pose satisfying the surgical requirements and having the most reasonable pose cannot be determined, which makes the operation easy to interrupt and reduces surgical safety and efficiency.
In a first aspect of the embodiments of the present disclosure, a method for determining a target pose of a mechanical arm is provided, comprising: acquiring the position of a target object in a robot-assisted orthopedic surgery platform, determining initial pose matrices according to the position of the target object, and using the initial pose matrices to predict the target pose matrices of the mechanical arm after it has moved into position; performing an inverse-kinematics operation on the target pose matrices to obtain the multiple mechanical arm poses corresponding to each target pose matrix, where each mechanical arm pose is represented by the angle values of all joints in that pose; determining the mutual positional relationships between the three-dimensional models in the robot-assisted orthopedic surgery platform according to the initial pose matrices and the mechanical arm poses, establishing a linear three-dimensional model between the navigation tracker and each tracer, and judging whether each mechanical arm pose is a non-occluding pose according to the positional relationships and the linear three-dimensional model; and obtaining the non-occluding mechanical arm poses, screening them with a preset target-pose judgment condition, and taking a non-occluding pose that satisfies the condition as the target pose.
In a second aspect of the embodiments of the present disclosure, a device for determining a target pose of a mechanical arm is provided, comprising: an acquisition module configured to acquire the position of a target object in the robot-assisted orthopedic surgery platform, determine initial pose matrices according to the position of the target object, and use the initial pose matrices to predict the target pose matrices of the mechanical arm after it has moved into position; an inverse-kinematics module configured to perform an inverse-kinematics operation on the target pose matrices to obtain the multiple mechanical arm poses corresponding to each target pose matrix, where each mechanical arm pose is represented by the angle values of all joints in that pose; a judgment module configured to determine the mutual positional relationships between the three-dimensional models in the robot-assisted orthopedic surgery platform according to the initial pose matrices and the mechanical arm poses, establish a linear three-dimensional model between the navigation tracker and each tracer, and judge whether each mechanical arm pose is a non-occluding pose according to the positional relationships and the linear three-dimensional model; and a screening module configured to obtain the non-occluding mechanical arm poses, screen them with a preset target-pose judgment condition, and take a non-occluding pose that satisfies the condition as the target pose.
In a third aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the steps of the above method.
At least one of the technical solutions adopted in the embodiments of the present disclosure can achieve the following beneficial effects:
acquiring the position of a target object in the robot-assisted orthopedic surgery platform, determining initial pose matrices according to that position, and using them to predict the target pose matrices of the mechanical arm after it has moved into position; performing an inverse-kinematics operation on the target pose matrices to obtain the multiple mechanical arm poses corresponding to each target pose matrix, where each pose is represented by the angle values of all joints; determining the mutual positional relationships between the three-dimensional models in the platform according to the initial pose matrices and the arm poses, establishing a linear three-dimensional model between the navigation tracker and each tracer, and judging whether each arm pose is a non-occluding pose; and screening the non-occluding poses with a preset target-pose judgment condition, taking a pose that satisfies the condition as the target pose. This ensures that, after the mechanical arm has moved into position, all tracers remain visible to the navigation tracker so the operation is not interrupted, and the target pose screened out in this way has a highly reasonable position and orientation, guaranteeing the safety and operability of the operation, simplifying the surgical workflow, and improving surgical efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments or for the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present disclosure; other drawings may be obtained from them by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a flowchart of a method for determining a target pose of a mechanical arm according to an embodiment of the disclosure;
FIG. 2 is a flow chart of a field of view detection algorithm provided by an embodiment of the present disclosure;
FIG. 3 is a flow diagram of a pose optimization algorithm provided by an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a device for determining a target pose of a mechanical arm according to an embodiment of the disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
As described in the background, after the surgical field has been located with the robot-assisted orthopedic navigation system, the surgeon can use the mechanical arm to perform the procedure. A robot-assisted orthopedic navigation system typically includes a navigation tracker (e.g., an NDI camera) and tracers (electronic or optical tracking elements) mounted on the surgical instruments and on the patient; the positional relationship between instrument and patient is determined from the tracer signals received by the navigation tracker. For a tracer to be located, it must lie within the field of view of the navigation tracker, and no obstacle may stand between the tracer and the tracker.
When an operation is performed with a mechanical arm, the arm must first move to the planned position and pose; the doctor then performs the operation (such as an orthopedic procedure) with the arm's end tool, a tool mounted at the end of the arm to assist the doctor. Because the procedure continues after the arm has moved into position, and the pose the arm takes when in position is not unique, a suitable pose must be selected as the target pose once the arm is in place. How to screen out a suitable target pose for the arm after it has moved into position is therefore crucial to robot-assisted orthopedic surgery.
In view of this, an embodiment of the present disclosure provides a method for determining the target pose of a mechanical arm in robot-assisted orthopedic surgery: all target pose matrices of the arm that satisfy the surgical requirements are computed; the multiple arm poses corresponding to each target pose matrix are computed with an inverse-kinematics algorithm; all arm poses satisfying the non-occlusion condition are screened out with a field-of-view detection algorithm; and finally a pose optimization algorithm selects, from the non-occluding poses, the target pose that satisfies the target-pose judgment conditions. In this way all tracers remain visible to the navigation tracker after the arm has moved into position, so position tracking is uninterrupted during the orthopedic operation, and the target pose finally screened out has a reasonable position and orientation, guaranteeing the safety and operability of the operation.
It should be noted that the following embodiments take the determination of a mechanical arm target pose in robot-assisted orthopedic surgery as an example, but the embodiments are not limited to orthopedic surgery: any other operation performed with a robot-assisted navigation system is applicable, and the application scenarios in the following embodiments do not limit the technical solution of the disclosure. The technical solution of the disclosure is described in detail below with reference to specific embodiments.
Fig. 1 is a flowchart illustrating a method for determining a target pose of a mechanical arm according to an embodiment of the disclosure. The method of fig. 1 may be performed by a robot-assisted orthopedic navigation system. As shown in fig. 1, the method may specifically include:
s101, acquiring the position of a target object in a robot-assisted orthopaedic surgery platform, determining an initial pose matrix according to the position of the target object, and predicting a target pose matrix generated after the mechanical arm moves in place by using the initial pose matrix;
s102, performing inverse solution operation on target pose matrixes to obtain multiple mechanical arm poses corresponding to each target pose matrix, wherein the mechanical arm poses represent angle values of all joints under each mechanical arm pose;
s103, determining the mutual position relation between three-dimensional models in the robot-assisted orthopaedic surgery platform according to the initial pose matrix and the mechanical arm poses, establishing a linear three-dimensional model between a navigation tracker and a tracer, and judging whether each mechanical arm pose is a non-shielding mechanical arm pose according to the mutual position relation and the linear three-dimensional model;
s104, obtaining the non-occlusion mechanical arm gesture, screening the non-occlusion mechanical arm gesture by utilizing a preset target gesture judgment condition, and taking the non-occlusion mechanical arm gesture meeting the target gesture judgment condition as the target gesture.
Specifically, the target objects in the embodiments of the present disclosure are the various equipment objects, tool objects, surgical planning positions, and the like, that assist the doctor in performing the operation on a robot-assisted orthopedic surgery platform. Because different platforms do not use exactly the same surgical instruments or tools, the target object is not completely fixed in practice and can be set freely for different application scenarios. Besides the robot-assisted orthopedic navigation system, the platform also comprises other equipment, such as the operating table and the controller.
Further, the target pose of the mechanical arm in the embodiments is determined intraoperatively; because many styles of end tool are used for surgery and the arm has many degrees of freedom, the pose of the arm on reaching the surgical site may differ. The mechanical arm of the embodiments may be a serial multi-joint arm (i.e., a multi-axis serial arm), although other types of arm may also be used; the embodiments are not limited to a particular type.
Further, the three-dimensional models in the embodiments include models of all machines and devices in the robot-assisted orthopedic surgery platform: models of components of the navigation system itself, such as the mechanical arm, the navigation tracker, and the tracers, and models of equipment outside the navigation system, such as tools and the operating table. Note that in the following embodiments "tracking element" and "tracer" have the same meaning; this substitution of terms does not limit the technical solution of the disclosure.
According to the technical solution provided by the embodiments of the present disclosure, the position of the target object in the robot-assisted orthopedic surgery platform is acquired, initial pose matrices are determined according to that position, and the initial pose matrices are used to predict the target pose matrices of the mechanical arm after it has moved into position; an inverse-kinematics operation is performed on the target pose matrices to obtain the multiple arm poses corresponding to each matrix, each pose being represented by the angle values of all joints; the mutual positional relationships between the three-dimensional models in the platform are determined from the initial pose matrices and the arm poses, a linear three-dimensional model is established between the navigation tracker and each tracer, and each arm pose is judged to be occluding or non-occluding; finally, the non-occluding poses are screened with a preset target-pose judgment condition and a pose satisfying the condition is taken as the target pose. This ensures that all tracers remain visible to the navigation tracker after the arm has moved into position, so the procedure is not interrupted, and the target pose screened out has a highly reasonable position and orientation, guaranteeing the safety and operability of the operation, simplifying the surgical workflow, and improving surgical efficiency.
In some embodiments, acquiring the position of a target object in the robot-assisted orthopedic surgery platform and determining initial pose matrices from that position comprises: determining the position of each target object with the robot-assisted orthopedic navigation system, establishing a coordinate system with the target object as origin according to its position, and generating the preset initial pose matrices from the positions and coordinate systems. The target objects comprise the navigation tracker, the arm-end tracer, the patient tracer, the surgical planning position, the arm end tool, and the arm base; the coordinate systems comprise the navigation tracker coordinate system, the patient tracer coordinate system, the arm-end coordinate system, and the arm base coordinate system.
Specifically, before all reachable target pose matrices of the arm that satisfy the surgical requirements are predicted, the initial pose matrices must be acquired; they can be regarded as the preset basic pose matrices from which the target pose matrices are generated. In practice, an initial pose matrix is determined from the position of a target object together with a coordinate system whose origin is that object. Because each coordinate system takes its target object as origin, different placements of the target objects yield different coordinate systems.
Further, the initial pose matrices in the embodiments include, but are not limited to: the initial pose matrix of the arm-end tracer in the navigation tracker coordinate system; of the patient tracer in the navigation tracker coordinate system; of the surgical planning position in the patient tracer coordinate system; of the arm end tool in the arm-end tracer coordinate system; of the arm-end tracer in the arm-end coordinate system; and of the arm end in the arm base coordinate system.
Further, when the navigation tracker determines the spatial position of each tracer, the tracker may emit an optical signal to each tracer (usually an optical tracer) and determine its position from the signal reflected back, or the tracer (usually an electronic tracer) may actively send a signal to the tracker, which determines the position from the received signal. Thus, when the pose matrix of a tracer in the navigation tracker coordinate system is needed, the data can be obtained directly from the tracker: once the tracker observes a tracer in its field of view, that tracer's pose matrix in tracker coordinates can be determined.
In some embodiments, predicting the target pose matrices of the mechanical arm after it has moved into position from the initial pose matrices comprises calculating each target pose matrix with the following formula:
T7 = T5 · T6 · T1⁻¹ · T2 · T3 · T4⁻¹ · T6⁻¹
where T7 is the target pose matrix; T1 is the initial pose matrix of the arm-end tracer in the navigation tracker coordinate system; T2 is that of the patient tracer in the navigation tracker coordinate system; T3 is that of the surgical planning position in the patient tracer coordinate system; T4 is that of the arm end tool in the arm-end tracer coordinate system; T5 is that of the arm-end tracer in the arm-end coordinate system; and T6 is that of the arm end in the arm base coordinate system.
Specifically, all target pose matrices of the arm that satisfy the robot-assisted surgery can be calculated with this formula. In a real application scenario the initial pose matrices T3 and T4 correspond to several different poses, so T3 and T4 vary while the remaining initial pose matrices correspond to fixed poses; substituting the variants into the formula yields the different target pose matrices.
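Evaluating the formula above is a chain of 4×4 homogeneous pose-matrix products. A minimal numerical sketch, assuming 4×4 rotation-plus-translation matrices and purely illustrative values for T1 through T6:

```python
import numpy as np

# Sketch of T7 = T5 · T6 · T1^-1 · T2 · T3 · T4^-1 · T6^-1 with 4x4
# homogeneous pose matrices. The concrete angles/offsets are arbitrary
# example values, not data from any real navigation system.

def pose(rz_deg, t):
    """Homogeneous pose: rotation about z by rz_deg, then translation t."""
    a = np.radians(rz_deg)
    c, s = np.cos(a), np.sin(a)
    m = np.eye(4)
    m[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    m[:3, 3] = t
    return m

inv = np.linalg.inv

T1 = pose(30, [0.1, 0.2, 1.5])    # arm-end tracer in tracker frame
T2 = pose(-45, [0.4, 0.0, 1.2])   # patient tracer in tracker frame
T3 = pose(10, [0.02, 0.03, 0.1])  # planned position in patient-tracer frame
T4 = pose(0, [0.0, 0.0, 0.15])    # end tool in arm-end-tracer frame
T5 = pose(90, [0.0, 0.05, 0.0])   # arm-end tracer in arm-end frame
T6 = pose(20, [0.3, 0.1, 0.6])    # arm end in arm-base frame

T7 = T5 @ T6 @ inv(T1) @ T2 @ T3 @ inv(T4) @ inv(T6)

R = T7[:3, :3]
print(np.allclose(R @ R.T, np.eye(3)))  # rotation part stays orthogonal: True
```

Because every factor is a rigid transform, the product T7 is again a rigid transform: its rotation block remains orthogonal and its bottom row remains (0, 0, 0, 1).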
Further, a pose matrix in the embodiments, also called a rotation-translation matrix, can be viewed as the combination of a rotation matrix and a translation matrix and describes the position and orientation of an object. Since the target pose matrix T7 is the pose matrix of the arm end in the arm base coordinate system after the arm has moved into position, the target pose matrices can be understood as all pose matrices the arm end can take when it moves to the surgical planning position.
Further, the tracers of the embodiments are mounted on the arm end and on the patient respectively, i.e., they divide into an arm-end tracer and a patient tracer; when mounted on the arm end, the arm-end tracer may be attached to the arm end tool. Note that the mounting positions and structures of the instruments and tools in the embodiments do not limit the technical solution of the disclosure.
In some embodiments, performing the inverse-kinematics operation on the target pose matrices to obtain the multiple arm poses corresponding to each matrix comprises: taking each target pose matrix and the arm parameters of each axis as inputs to an arm inverse-kinematics algorithm, and running the algorithm on each target pose matrix in turn to obtain the joint angle values of the arm corresponding to that matrix, each arm pose corresponding to one set of joint angle values.
Specifically, after the target pose matrices have been calculated, all arm poses corresponding to each matrix are computed with an arm inverse-kinematics algorithm: from a target pose matrix, the algorithm derives the multiple poses the arm can take, each pose corresponding to one set of joint angle values, where the joint angle values are the angles of the arm's joints in that pose. The principle and implementation of the inverse solution are described below with reference to specific embodiments, and may include the following:
In a multi-axis serial mechanical arm, several axes connected in series form an arm with a certain number of degrees of freedom; each axis corresponds to one joint angle, and each joint angle can be converted into the pose matrix of that joint. For example, the pose matrix of each axis is determined from its joint angle; if the arm has six axes, the pose matrix of the arm end is obtained by multiplying the six per-axis pose matrices. Since the joint angles can be converted to per-joint pose matrices whose product gives the end pose matrix, the joint angles under a target pose matrix can conversely be solved from that matrix.
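The forward relation described here (end pose as a product of per-joint pose matrices) can be sketched for a six-axis arm. The "rotate about z, offset along the parent x axis" joint model and the link lengths below are illustrative assumptions in the style of a DH parameterization, not the kinematics of any particular arm:

```python
import numpy as np

# Forward kinematics sketch: the end pose of a six-axis serial arm is the
# product of six per-joint homogeneous pose matrices.

def joint_pose(theta_deg, link_len):
    """One joint: rotate about z by theta_deg, with a fixed link offset
    of link_len along the parent x axis (toy DH-style convention)."""
    a = np.radians(theta_deg)
    c, s = np.cos(a), np.sin(a)
    m = np.eye(4)
    m[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    m[0, 3] = link_len
    return m

def forward_kinematics(thetas, links):
    T = np.eye(4)
    for th, L in zip(thetas, links):   # chain the six joints in series
        T = T @ joint_pose(th, L)
    return T

# Example joint angles (deg) and link lengths (m) -- arbitrary values.
T_end = forward_kinematics([10, 20, -30, 45, 60, -15],
                           [0.3, 0.25, 0.2, 0.1, 0.1, 0.05])
print(np.round(np.linalg.det(T_end[:3, :3]), 6))  # proper rotation: 1.0
```

The inverse solution runs this relation backwards: given a desired T_end (the target pose matrix T7), it solves for the joint angle sets that reproduce it.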
Further, the input variables of the arm inverse-kinematics algorithm are the target pose matrix T7 of the arm end and the arm parameters of each axis, which may include the length and the angle of the axis. In practice, since each arm pose corresponds to one set of joint angle values and those values are the angles of the individual joints, different combinations of joint angle values correspond to different arm poses.
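That one target pose can map to several joint-angle sets is visible even on a two-link planar arm, whose analytic inverse solution has an elbow-up and an elbow-down branch. This toy model (with made-up link lengths) is only an illustration of the multiplicity; the six-axis arm of the disclosure has correspondingly more solution branches:

```python
import math

# Two-link planar arm: one reachable end position, two joint solutions.
L1, L2 = 0.30, 0.25   # example link lengths (m)

def ik(x, y):
    """Return both (q1, q2) joint-angle solutions reaching (x, y)."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    sols = []
    for q2 in (math.acos(c2), -math.acos(c2)):   # elbow-down / elbow-up
        q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2),
                                           L1 + L2 * math.cos(q2))
        sols.append((q1, q2))
    return sols

def fk(q1, q2):
    """Forward check: end position for joint angles (q1, q2)."""
    return (L1 * math.cos(q1) + L2 * math.cos(q1 + q2),
            L1 * math.sin(q1) + L2 * math.sin(q1 + q2))

target = (0.35, 0.20)
for q1, q2 in ik(*target):
    x, y = fk(q1, q2)
    print(round(x, 6), round(y, 6))   # both solutions reach (0.35, 0.2)
```

Both joint-angle sets put the end effector at the same target point, which is exactly why a further screening stage (visibility, then the judgment condition) is needed to pick one.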
In some embodiments, determining the mutual positional relationships between the three-dimensional models in the robot-assisted orthopedic surgery platform and establishing a linear three-dimensional model between the navigation tracker and the tracers according to the initial pose matrices and the arm poses comprises: loading the preset three-dimensional models of the platform, determining the spatial position of the arm model from the arm pose and the spatial positions of the other models from the initial pose matrices, and determining the mutual positional relationships between the models from these spatial positions; and establishing connecting lines between the signal transmitters and the signal receivers and generating the linear three-dimensional model from these lines, where the signal transmitters comprise at least one transmitter mounted on the navigation tracker and the signal receivers comprise several receivers mounted on the arm-end tracer and on the patient tracer.
Specifically, when screening out all mechanical arm poses meeting the non-occlusion condition with the field-of-view detection algorithm, all three-dimensional models are first loaded in the system software (the three-dimensional software installed in the robot-assisted orthopaedic navigation system), and the linear three-dimensional model between the signal transmitters and the signal receivers is established. The three-dimensional models can be loaded from a third-party library of the system; each three-dimensional model corresponds to a model file in STL format, and the corresponding three-dimensional model is obtained directly after the system software loads the model file. In practical applications, the three-dimensional models include the 3D models of all instruments and tools involved in the robot-assisted orthopaedic surgery platform, such as the three-dimensional model of the patient tracer, the three-dimensional model of the mechanical arm end tracer, the three-dimensional model of the navigation tracker, the three-dimensional model of the operating table, and the like.
Further, after the system software finishes loading all the three-dimensional models, the position of each model is determined according to the mechanical arm pose and the initial pose matrices, and the mutual positional relationships among all the models in the three-dimensional software are determined from these positions. In practical applications, the position of the three-dimensional model corresponding to the mechanical arm is determined from the mechanical arm pose, while the other three-dimensional models are positioned according to the initial pose matrices T1, T2, T3, etc. of the foregoing embodiments.
Further, when the linear three-dimensional model is built, the connection lines between the signal transmitters on the navigation tracker and the signal receivers on each tracer are established first, and the linear three-dimensional model is then generated from these connection lines. In practical applications, a signal transmitter is an element, such as an optical reflection ball, mounted on the navigation tracker for transmitting an optical signal, and a signal receiver is an element mounted on a tracer for receiving the optical signal. It should be noted that the navigation tracker may carry a plurality of signal transmitters and each tracer may carry a plurality of signal receivers, so that each signal transmitter has a connection relationship with a plurality of signal receivers on each tracer.
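Because each signal transmitter is connected to every signal receiver on every tracer, the set of connection lines is simply the Cartesian product of the two point sets. A minimal sketch (the point representation and function name are assumptions for illustration):

```python
from itertools import product

def connection_lines(transmitters, receivers):
    """Pair each signal transmitter on the navigation tracker with
    every signal receiver on the tracers; each pair is one line of
    the linear three-dimensional model."""
    return [(t, r) for t, r in product(transmitters, receivers)]
```

For example, two transmitters and five receivers yield ten lines, every one of which must later be tested for occlusion.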
In some embodiments, determining whether each mechanical arm pose is a non-occluded mechanical arm pose according to the mutual positional relationship and the linear three-dimensional model includes: judging, according to the spatial positions and mutual positional relationships of the three-dimensional models, whether the linear three-dimensional model passes through any other three-dimensional model in space; when the linear three-dimensional model passes through a three-dimensional model other than the two models at the ends of the connection line, occlusion exists between the navigation tracker and a tracer in that mechanical arm pose; otherwise, the mechanical arm pose is determined to be a non-occluded mechanical arm pose.
Specifically, the embodiment of the disclosure screens out all mechanical arm poses meeting the non-occlusion condition with the field-of-view detection algorithm, i.e., judges whether each mechanical arm pose is a non-occluded pose, where non-occluded means that no other three-dimensional model blocks the line of sight between the navigation tracker and a tracer. The implementation of the field-of-view detection algorithm is described in detail below with reference to the accompanying drawings; fig. 2 is a schematic flow chart of the field-of-view detection algorithm provided in an embodiment of the disclosure. As shown in fig. 2, the field-of-view detection algorithm may include the following steps:
S201, loading all three-dimensional models in the three-dimensional software, including the three-dimensional model of the patient tracer, the three-dimensional model of the mechanical arm end tracer, the three-dimensional model of the navigation tracker, and the like;
S202, selecting, in turn, one mechanical arm pose from all mechanical arm poses obtained by the mechanical arm inverse solution algorithm;
S203, determining the mutual positional relationships among all three-dimensional models in the three-dimensional software according to the mechanical arm pose and the initial pose matrices;
S204, establishing the linear three-dimensional model connecting each signal transmitter to each signal receiver;
S205, judging whether the linear three-dimensional model passes through any other three-dimensional model; returning to S202 if it does, otherwise executing the next step;
S206, storing the mechanical arm pose as a non-occluded mechanical arm pose in a software variable;
S207, judging whether all mechanical arm poses have been calculated; returning to S202 if not, otherwise ending the calculation flow.
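Steps S205-S206 reduce to a segment-versus-model intersection test. The sketch below substitutes an axis-aligned bounding box (the standard slab test) for the real intersection with the loaded STL meshes, so it is an approximation of the disclosed check, not the disclosed algorithm itself; all names are hypothetical.

```python
def segment_hits_box(p0, p1, box_min, box_max):
    """Slab test: does the segment p0->p1 pass through the axis-aligned
    box? (A stand-in for intersecting a connection line with a mesh.)"""
    t_enter, t_exit = 0.0, 1.0
    for a, b, lo, hi in zip(p0, p1, box_min, box_max):
        d = b - a
        if abs(d) < 1e-12:               # segment parallel to this slab
            if a < lo or a > hi:
                return False
            continue
        t0, t1 = (lo - a) / d, (hi - a) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_enter, t_exit = max(t_enter, t0), min(t_exit, t1)
        if t_enter > t_exit:
            return False
    return True

def pose_is_unoccluded(lines, obstacle_boxes):
    """A pose is kept only if no transmitter-receiver line passes
    through any other model (the check of steps S205-S206)."""
    return not any(segment_hits_box(p0, p1, lo, hi)
                   for p0, p1 in lines
                   for lo, hi in obstacle_boxes)
```

The obstacle boxes stand for every model except the two at the ends of the line being tested, matching the exclusion stated in the embodiment.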
Further, in the above field-of-view detection algorithm, if the linear three-dimensional model is determined to pass through the three-dimensional model of a tool or tracer other than the two models at the ends of the line, there is occlusion between the navigation tracker and a tracer in that mechanical arm pose; only when the linear three-dimensional model passes through no other three-dimensional model is the pose regarded as a non-occluded mechanical arm pose.
According to the technical scheme provided by the embodiment of the disclosure, non-occluded mechanical arm poses are screened out by the field-of-view detection algorithm, so that all tracers remain visible to the navigation tracker after the mechanical arm moves into place and the orthopaedic operation can proceed without interruption.
In some embodiments, obtaining the non-occluded mechanical arm poses and screening them with a predetermined target pose determination condition includes: acquiring all non-occluded mechanical arm poses and screening them in turn with the target pose determination conditions of a predetermined pose optimization algorithm to obtain the non-occluded mechanical arm poses that satisfy the target pose determination conditions, where the target pose determination conditions include a first determination condition, a second determination condition, a third determination condition and a fourth determination condition.
Specifically, after all non-occluded mechanical arm poses are screened out with the field-of-view detection algorithm, the mechanical arm target pose satisfying the target pose determination conditions is further screened from them by the pose optimization algorithm. In practical application, the pose optimization algorithm includes four target pose determination conditions executed in sequence; the non-occluded mechanical arm poses are screened with each condition in turn, the input of the current condition being the set of poses passed by the previous condition. The contents of the four target pose determination conditions are described below with reference to specific embodiments.
In some embodiments, the target pose determination conditions of the predetermined pose optimization algorithm are used to screen the non-occluded mechanical arm poses in turn. The screening process may specifically include the following:
the first determination condition judges, according to the connection lines between the signal transmitters and the signal receivers and the angle of the navigation tracker, whether the mechanical arm end tracer can be observed by the navigation tracker; when it can be observed, the second determination condition is executed; otherwise, the next mechanical arm pose is selected in turn from all the non-occluded mechanical arm poses and the first determination condition is applied again;

the second determination condition judges, according to the angle values of the joints in the mechanical arm, whether the non-occluded mechanical arm pose approaches a singular configuration; when it is not a singular configuration, the third determination condition is executed; otherwise, the next mechanical arm pose is selected in turn from all the non-occluded mechanical arm poses and the first determination condition is applied again;

the third determination condition judges whether the mechanical arm end is below the target point corresponding to the surgical position in the non-occluded mechanical arm pose; when it is not below the target point, the fourth determination condition is executed; otherwise, the next mechanical arm pose is selected in turn from all the non-occluded mechanical arm poses and the first determination condition is applied again;

the fourth determination condition judges whether the target point lies between the mechanical arm end and the mechanical arm base in the non-occluded mechanical arm pose; when it does not, the non-occluded mechanical arm pose satisfying all the determination conditions is taken as a target pose; otherwise, the next mechanical arm pose is selected in turn from all the non-occluded mechanical arm poses and the first determination condition is applied again, until the non-occluded mechanical arm poses satisfying all the determination conditions have been screened out.
The first, second, third and fourth determination conditions may be executed sequentially in that order, i.e., the non-occluded mechanical arm poses are screened with the four conditions one after another; it should be understood, however, that the screening of the target pose in the embodiments of the present disclosure is not limited to this execution order, and the terms "first", "second", "third" and "fourth" do not limit the execution order.
Specifically, the pose optimization algorithm further screens the target pose from the non-occluded mechanical arm poses obtained by the field-of-view detection algorithm. The implementation of the pose optimization algorithm is described in detail below with reference to the accompanying drawings; fig. 3 is a schematic flow chart of the pose optimization algorithm provided by an embodiment of the disclosure. As shown in fig. 3, the pose optimization algorithm may include the following steps:
S301, acquiring all non-occluded mechanical arm poses screened out by the field-of-view detection algorithm;
S302, selecting, in turn, one mechanical arm pose from all non-occluded mechanical arm poses;
S303, judging whether the mechanical arm end tracer can be observed by the navigation tracker in this pose; returning to S302 if it cannot, otherwise executing the next step;
S304, judging whether the mechanical arm approaches a singular configuration in this pose; returning to S302 if it does, otherwise executing the next step;
S305, judging whether the mechanical arm end is below the target point in this pose; returning to S302 if it is, otherwise executing the next step;
S306, judging whether the target point lies between the mechanical arm end and the mechanical arm base in this pose; returning to S302 if it does, otherwise executing the next step;
S307, storing the mechanical arm pose in a software variable as a target pose satisfying the conditions;
S308, judging whether all mechanical arm poses have been calculated; returning to S302 if not, otherwise executing the next step;
S309, selecting any one of the screened target poses as the pose of the mechanical arm after it moves into place during the operation.
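The loop S302-S307 is a sequential filter: each candidate pose must pass the four checks in order to be kept. A minimal sketch with the checks left as caller-supplied predicates (the concrete criteria of S303-S306 are deliberately not implemented here; the toy predicates in the usage below are placeholders):

```python
def select_target_poses(unoccluded_poses, checks):
    """Run every candidate pose through the checks in order (S303-S306)
    and collect the poses that pass all of them (S307-S308); any member
    of the result may then be chosen as the target pose (S309)."""
    return [pose for pose in unoccluded_poses
            if all(check(pose) for check in checks)]
```

In the disclosed algorithm the four predicates would be the observability, singularity, below-target and target-between-end-and-base checks described above.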
Further, regarding the determination in step S304, a singular configuration is an inherent property of serial mechanical arms; each serial mechanical arm has one or several singular configurations, at which excessive joint speeds cause unstable motion of the robot. A singular configuration arises when degrees of freedom of the serial mechanical arm align during motion, reducing the arm's effective degrees of freedom; when the arm approaches a singular configuration, a tiny displacement of the arm end can cause drastic changes in the angles of certain axes, producing near-infinite angular velocities and excessive joint speeds.
In practical application, whether the mechanical arm approaches a singular configuration in a given pose can be judged by calculating the Jacobian matrix of that pose from the corresponding joint angle values. At a singular configuration, the rank of the Jacobian matrix drops and its determinant becomes zero, so the inverse of the Jacobian cannot be computed and the joint angular velocities obtained from it approach infinity. The Jacobian matrix relates the joint velocities to the differential Cartesian motion of the arm end, which includes rotation and translation; the differential motion corresponds to the linear and angular velocities of the end.
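The determinant-based singularity test can be demonstrated on a planar two-link arm, whose 2x2 Jacobian is a small analogue of the 6x6 Jacobian of a six-axis arm; the planar model, the default link lengths and the threshold `eps` are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

def planar_jacobian(theta1, theta2, l1, l2):
    """Jacobian of a planar 2-link arm: maps joint angular velocities
    to the Cartesian velocity of the arm end."""
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([
        [-l1 * s1 - l2 * s12, -l2 * s12],
        [ l1 * c1 + l2 * c12,  l2 * c12],
    ])

def near_singularity(theta1, theta2, l1=1.0, l2=1.0, eps=1e-3):
    """The rank of J drops (det J -> 0) at a singular configuration,
    so joint speeds computed from inv(J) blow up; reject such poses."""
    return abs(np.linalg.det(planar_jacobian(theta1, theta2, l1, l2))) < eps
```

For this arm det(J) = l1·l2·sin(theta2), so it is singular when fully stretched (theta2 = 0) or folded back (theta2 = pi), matching the description of drastically increasing joint speeds near such configurations.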
Further, regarding the determination in step S305, whether the mechanical arm end is below the target point in a given pose is judged by calculating the angle between the axis of the mechanical arm end and the Z axis of the world coordinate system; when the angle is smaller than 90°, the arm end is below the target point in this pose. When the arm end is below the target point, the operation is affected and the arm end may collide with the patient or the operating table.
Further, regarding the determination in step S306, whether the target point lies between the mechanical arm end and the mechanical arm base in a given pose is judged by calculating the angle between the projection of the mechanical arm end on the XY plane of the base coordinate system and the vector from the origin of the base coordinate system to the mechanical arm end. When this angle is greater than 90°, the patient lies between the arm end and the base; in such a pose, after the arm extends, its joints bend backwards into a folded configuration, which affects the operation.
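The two angle criteria of steps S305 and S306 reduce to dot-product tests. The sketch below implements them as the text states them; representing the arm end axis as a direction vector, and the specific argument shapes, are interpretive assumptions about the original wording.

```python
import math

def angle_deg(u, v):
    """Angle in degrees between two vectors of matching dimension."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def end_below_target(end_axis):
    """S305 criterion as stated: the arm end is below the target point
    when the angle between the end axis and the world Z axis is < 90 deg."""
    return angle_deg(end_axis, (0.0, 0.0, 1.0)) < 90.0

def target_between_end_and_base(end_projection_xy, base_to_end_xy):
    """S306 criterion as stated: the target point lies between the arm
    end and the base when the angle between the end's XY projection and
    the base->end vector exceeds 90 deg."""
    return angle_deg(end_projection_xy, base_to_end_xy) > 90.0
```

A pose is rejected when either predicate is true, mirroring the return to S302 in the flow chart.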
According to the technical scheme provided by the embodiment of the disclosure, all screened non-occluded mechanical arm poses are further judged and screened by the pose optimization algorithm using the target pose determination conditions, so that the finally selected mechanical arm pose is a reasonable, comfortable pose that satisfies the operation requirements, ensuring the safety and operability of the operation.
The following are device embodiments of the present disclosure that may be used to perform method embodiments of the present disclosure. For details not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the method of the present disclosure.
Fig. 4 is a schematic structural diagram of a device for determining a target pose of a mechanical arm according to an embodiment of the disclosure. As shown in fig. 4, the device for determining the target pose of the mechanical arm includes:
the acquisition module 401 is configured to acquire the positions of target objects in the robot-assisted orthopaedic surgery platform, determine initial pose matrices according to the positions of the target objects, and predict, using the initial pose matrices, the target pose matrices generated after the mechanical arm moves into place;

the inverse solution module 402 is configured to perform an inverse solution operation on the target pose matrices to obtain a plurality of mechanical arm poses corresponding to each target pose matrix, where each mechanical arm pose represents the angle values of the joints in that pose;

the judging module 403 is configured to determine the mutual positional relationships between the three-dimensional models in the robot-assisted orthopaedic surgery platform according to the initial pose matrices and the mechanical arm poses, establish the linear three-dimensional model between the navigation tracker and the tracers, and judge, according to the mutual positional relationships and the linear three-dimensional model, whether each mechanical arm pose is a non-occluded mechanical arm pose;

and the screening module 404 is configured to obtain the non-occluded mechanical arm poses, screen them with the predetermined target pose determination conditions, and take the non-occluded mechanical arm poses satisfying the target pose determination conditions as target poses.
In some embodiments, the acquisition module 401 of fig. 4 determines the position of the target object using the robot-assisted orthopaedic navigation system, establishes a coordinate system with the target object as an origin according to the position of the target object, and generates a predetermined initial pose matrix according to the position of the target object and the coordinate system; the target object comprises a navigation tracker, a mechanical arm tail end tracer, a patient tracer, a surgical planning position, a mechanical arm tail end tool and a mechanical arm base, and the coordinate system comprises a navigation tracker coordinate system, a patient tracer coordinate system, a mechanical arm tail end coordinate system and a mechanical arm base coordinate system.
In some embodiments, the acquisition module 401 of fig. 4 calculates the target pose matrix using the following formula:
T7 = T5 · T6 · T1⁻¹ · T2 · T3 · T4⁻¹ · T6⁻¹
wherein T7 represents a target pose matrix, T1 represents an initial pose matrix of the robot arm end tracer in a navigation tracker coordinate system, T2 represents an initial pose matrix of the patient tracer in a navigation tracker coordinate system, T3 represents an initial pose matrix of the surgery planning position in the patient tracer coordinate system, T4 represents an initial pose matrix of the robot arm end tool in the robot arm end tracer coordinate system, T5 represents an initial pose matrix of the robot arm end tracer in the robot arm end coordinate system, and T6 represents an initial pose matrix of the robot arm end in the robot arm base coordinate system.
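Evaluating the formula for T7 is a direct chain of 4x4 homogeneous matrix products and inverses. A NumPy sketch (the function name and the identity/translation inputs used for illustration are made up, not from this disclosure):

```python
import numpy as np

def target_pose(T1, T2, T3, T4, T5, T6):
    """Compose the target pose matrix T7 from the initial pose
    matrices according to the chain given above; all inputs are
    4x4 homogeneous transforms."""
    inv = np.linalg.inv
    return T5 @ T6 @ inv(T1) @ T2 @ T3 @ inv(T4) @ inv(T6)
```

As a sanity check, with every initial pose matrix equal to the identity, T7 is the identity; perturbing a single matrix, such as T2, propagates directly into T7.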
In some embodiments, the inverse solution module 402 of fig. 4 uses the target pose matrix and the mechanical arm parameters corresponding to each axis in the mechanical arm as inputs of the mechanical arm inverse solution algorithm, and performs inverse solution calculation on each target pose matrix in turn by using the mechanical arm inverse solution algorithm, to obtain the angle value of each joint in the mechanical arm corresponding to each target pose matrix, where each mechanical arm pose corresponds to a set of angle values of the joints.
In some embodiments, the determining module 403 of fig. 4 loads the three-dimensional model in the preset robot-assisted orthopaedic surgery platform, determines the spatial position of the mechanical arm model in the three-dimensional model according to the mechanical arm pose, determines the spatial positions of other three-dimensional models in the three-dimensional model according to the initial pose matrix, and determines the mutual positional relationship between the three-dimensional models according to the spatial positions; and establishing a connection line between the signal transmitter and the signal receiver, and generating a linear three-dimensional model according to the connection line, wherein the signal transmitter comprises at least one signal transmitter installed on a navigation tracker, and the signal receiver comprises a plurality of signal receivers installed on a tail end tracer of the mechanical arm and a patient tracer.
In some embodiments, the determining module 403 of fig. 4 determines, according to the spatial position and the mutual positional relationship of the three-dimensional models, whether the linear three-dimensional model passes through other three-dimensional models in the space, when the linear three-dimensional model passes through the other three-dimensional models except the three-dimensional models at two ends of the connecting line, there is a shielding between the corresponding navigation tracker and the tracer in the mechanical arm gesture, otherwise, the mechanical arm gesture is determined to be the non-shielding mechanical arm gesture.
In some embodiments, the screening module 404 of fig. 4 obtains all the non-occluded mechanical arm poses, and sequentially screens the non-occluded mechanical arm poses using the target pose determination conditions in the predetermined pose optimization algorithm to obtain non-occluded mechanical arm poses that satisfy the target pose determination conditions; the target posture judging conditions comprise a first judging condition, a second judging condition, a third judging condition and a fourth judging condition.
In some embodiments, the first determination condition judges, according to the connection lines between the signal transmitters and the signal receivers and the angle of the navigation tracker, whether the mechanical arm end tracer can be observed by the navigation tracker, and executes the second determination condition when it can be observed; the second determination condition judges, according to the angle values of the joints in the mechanical arm, whether the non-occluded mechanical arm pose approaches a singular configuration, and executes the third determination condition when it is not a singular configuration; the third determination condition judges whether the mechanical arm end is below the target point corresponding to the surgical position in the non-occluded mechanical arm pose, and executes the fourth determination condition when it is not below the target point; and the fourth determination condition judges whether the target point lies between the mechanical arm end and the mechanical arm base in the non-occluded mechanical arm pose, and screens out the non-occluded mechanical arm poses satisfying all the determination conditions when the target point does not lie between them.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the disclosure.
Fig. 5 is a schematic structural diagram of an electronic device 5 provided in an embodiment of the present disclosure. As shown in fig. 5, the electronic apparatus 5 of this embodiment includes: a processor 501, a memory 502 and a computer program 503 stored in the memory 502 and executable on the processor 501. The steps of the various method embodiments described above are implemented by processor 501 when executing computer program 503. Alternatively, the processor 501, when executing the computer program 503, performs the functions of the modules/units in the above-described apparatus embodiments.
Illustratively, the computer program 503 may be partitioned into one or more modules/units, which are stored in the memory 502 and executed by the processor 501 to complete the present disclosure. One or more of the modules/units may be a series of computer program instruction segments capable of performing a specific function for describing the execution of the computer program 503 in the electronic device 5.
The electronic device 5 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or the like. The electronic device 5 may include, but is not limited to, a processor 501 and a memory 502. It will be appreciated by those skilled in the art that fig. 5 is merely an example of the electronic device 5 and is not meant to be limiting as the electronic device 5 may include more or fewer components than shown, or may combine certain components, or different components, e.g., the electronic device may further include an input-output device, a network access device, a bus, etc.
The processor 501 may be a central processing unit (Central Processing Unit, CPU) or other general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 502 may be an internal storage unit of the electronic device 5, for example, a hard disk or a memory of the electronic device 5. The memory 502 may also be an external storage device of the electronic device 5, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like, which are provided on the electronic device 5. Further, the memory 502 may also include both internal storage units and external storage devices of the electronic device 5. The memory 502 is used to store computer programs and other programs and data required by the electronic device. The memory 502 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/computer device and method may be implemented in other manners. For example, the apparatus/computer device embodiments described above are merely illustrative, e.g., the division of modules or elements is merely a logical functional division, and there may be additional divisions of actual implementations, multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present disclosure may implement all or part of the flow of the methods of the above embodiments by means of a computer program instructing the related hardware; the computer program may be stored in a computer readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program may comprise computer program code, which may be in source code form, object code form, an executable file or some intermediate form. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunications signals.
The above embodiments are merely intended to illustrate the technical solutions of the present disclosure, not to limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure and are intended to be included within the scope of the present disclosure.

Claims (10)

1. A method for determining a target pose of a mechanical arm, characterized by comprising the following steps:
acquiring the position of a target object in a robot-assisted orthopaedic surgery platform, determining an initial pose matrix according to the position of the target object, and predicting, from the initial pose matrix, a target pose matrix generated after the mechanical arm moves into position;
performing an inverse solution operation on the target pose matrices to obtain a plurality of mechanical arm poses corresponding to each target pose matrix, wherein each mechanical arm pose is represented by the angle values of all joints in that pose;
determining the relative positional relationship between the three-dimensional models in the robot-assisted orthopaedic surgery platform according to the initial pose matrix and the mechanical arm poses, establishing a linear three-dimensional model between a navigation tracker and a tracer, and judging, according to the relative positional relationship and the linear three-dimensional model, whether each mechanical arm pose is a non-occlusion mechanical arm pose; and
acquiring the non-occlusion mechanical arm poses, screening them using a predetermined target pose judgment condition, and taking a non-occlusion mechanical arm pose that satisfies the target pose judgment condition as the target pose.
2. The method of claim 1, wherein acquiring the position of the target object in the robot-assisted orthopaedic surgery platform and determining the initial pose matrix according to the position of the target object comprises:
determining the position of the target object using a robot-assisted orthopaedic navigation system, establishing a coordinate system with the target object as its origin according to the position of the target object, and generating a preset initial pose matrix according to the position of the target object and the coordinate system;
wherein the target object comprises a navigation tracker, a mechanical arm end tracer, a patient tracer, a surgical planning position, a mechanical arm end tool and a mechanical arm base, and the coordinate system comprises a navigation tracker coordinate system, a patient tracer coordinate system, a mechanical arm end coordinate system and a mechanical arm base coordinate system.
3. The method of claim 2, wherein predicting, from the initial pose matrix, the target pose matrix generated after the mechanical arm moves into position comprises calculating the target pose matrix using the following formula:
T7 = T5·T6·T1⁻¹·T2·T3·T4⁻¹·T6⁻¹
wherein T7 represents the target pose matrix, T1 represents the initial pose matrix of the mechanical arm end tracer in the navigation tracker coordinate system, T2 represents the initial pose matrix of the patient tracer in the navigation tracker coordinate system, T3 represents the initial pose matrix of the surgical planning position in the patient tracer coordinate system, T4 represents the initial pose matrix of the mechanical arm end tool in the mechanical arm end tracer coordinate system, T5 represents the initial pose matrix of the mechanical arm end tracer in the mechanical arm end coordinate system, and T6 represents the initial pose matrix of the mechanical arm end in the mechanical arm base coordinate system.
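The formula of claim 3 is a chain of 4×4 homogeneous pose matrices. The sketch below uses arbitrary placeholder transforms (pure translations, not values from the patent); only the composition order follows the claimed formula.

```python
import numpy as np

def translation(t):
    """Build a 4x4 homogeneous pose matrix for a pure translation t."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

# Illustrative initial pose matrices (arbitrary example values only).
T1 = translation((0.1, 0.0, 0.5))    # end tracer in navigation-tracker frame
T2 = translation((0.2, 0.1, 0.6))    # patient tracer in navigation-tracker frame
T3 = translation((0.0, 0.05, 0.1))   # planned position in patient-tracer frame
T4 = translation((0.0, 0.0, 0.15))   # end tool in end-tracer frame
T5 = translation((0.0, 0.0, -0.05))  # end tracer in arm-end frame
T6 = translation((0.3, 0.0, 0.4))    # arm end in arm-base frame

# Target pose matrix per the claimed formula:
# T7 = T5 . T6 . T1^-1 . T2 . T3 . T4^-1 . T6^-1
T7 = (T5 @ T6 @ np.linalg.inv(T1) @ T2 @ T3
      @ np.linalg.inv(T4) @ np.linalg.inv(T6))
print(T7.shape)  # T7 is again a 4x4 homogeneous pose matrix
```

With pure translations the chain reduces to a sum of (signed) translation vectors, which makes the composition easy to verify by hand.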
4. The method of claim 1, wherein performing the inverse solution operation on the target pose matrices to obtain a plurality of mechanical arm poses corresponding to each target pose matrix comprises:
taking each target pose matrix and the mechanical arm parameters corresponding to each axis of the mechanical arm as inputs to a mechanical arm inverse solution algorithm, and sequentially performing an inverse solution calculation on each target pose matrix using the algorithm to obtain the angle values of all joints of the mechanical arm corresponding to that target pose matrix, wherein each mechanical arm pose corresponds to one set of joint angle values.
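The inverse solution of claim 4 typically yields several candidate joint-angle sets for one target pose matrix. As an illustrative sketch only (a planar two-link arm in closed form, not the patent's six-axis algorithm), the two solution branches can be computed as follows:

```python
import math

def ik_two_link(x, y, l1, l2):
    """Return the joint-angle sets (radians) that place a planar two-link
    arm's tip at (x, y); in general there are two solutions."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        return []  # target out of reach
    solutions = []
    for sign in (+1, -1):  # elbow-down / elbow-up branches
        q2 = sign * math.acos(c2)
        q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                           l1 + l2 * math.cos(q2))
        solutions.append((q1, q2))
    return solutions

# One Cartesian target, two candidate joint configurations:
for q1, q2 in ik_two_link(1.0, 1.0, 1.0, 1.0):
    print(round(math.degrees(q1), 1), round(math.degrees(q2), 1))
```

Each returned tuple is one "mechanical arm pose" in the sense of the claim: a set of joint angle values that realises the same end pose.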
5. The method of claim 1, wherein determining the relative positional relationship between the three-dimensional models in the robot-assisted orthopaedic surgery platform and establishing the linear three-dimensional model between the navigation tracker and the tracer according to the initial pose matrix and the mechanical arm poses comprises:
loading preset three-dimensional models into the robot-assisted orthopaedic surgery platform, determining the spatial positions of the mechanical arm models among the three-dimensional models according to the mechanical arm poses, determining the spatial positions of the other three-dimensional models according to the initial pose matrix, and determining the relative positional relationship between the three-dimensional models from these spatial positions; and
establishing a connecting line between a signal transmitter and a signal receiver, and generating the linear three-dimensional model from the connecting line, wherein the signal transmitter comprises at least one signal transmitter mounted on the navigation tracker, and the signal receiver comprises a plurality of signal receivers mounted on the mechanical arm end tracer and on the patient tracer.
6. The method of claim 5, wherein judging whether each mechanical arm pose is a non-occlusion mechanical arm pose according to the relative positional relationship and the linear three-dimensional model comprises:
judging, according to the spatial positions and relative positional relationship of the three-dimensional models, whether the linear three-dimensional model passes through any other three-dimensional model in space; when the linear three-dimensional model passes through a three-dimensional model other than those at the two ends of the connecting line, occlusion exists between the navigation tracker and the corresponding tracer in that mechanical arm pose; otherwise, the mechanical arm pose is determined to be a non-occlusion mechanical arm pose.
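The occlusion check of claims 5-6 amounts to testing whether the transmitter-receiver line segment intersects any intervening model. A minimal sketch using sphere proxies for the three-dimensional models (an assumption for illustration; the patent does not specify the geometry representation):

```python
import numpy as np

def segment_hits_sphere(p0, p1, center, radius):
    """True if segment p0->p1 intersects the sphere, i.e. the
    line of sight between transmitter and receiver is blocked."""
    p0, p1, c = (np.asarray(v, float) for v in (p0, p1, center))
    d = p1 - p0
    # Parameter of the point on the segment closest to the sphere centre.
    t = np.clip(np.dot(c - p0, d) / np.dot(d, d), 0.0, 1.0)
    closest = p0 + t * d
    return np.linalg.norm(closest - c) <= radius

def is_unoccluded(transmitter, receivers, obstacles):
    """Pose passes if no transmitter->receiver segment crosses any obstacle."""
    return all(not segment_hits_sphere(transmitter, r, c, rad)
               for r in receivers for (c, rad) in obstacles)

# Example: one obstacle sphere sits between the transmitter and the first receiver.
transmitter = (0.0, 0.0, 0.0)
receivers = [(2.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
obstacles = [((1.0, 0.0, 0.0), 0.2)]
print(is_unoccluded(transmitter, receivers, obstacles))  # line of sight blocked
```

In practice the same segment test would be run against triangle meshes of the arm, trackers and patient models rather than spheres; the pass/fail logic per pose is unchanged.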
7. The method of claim 1, wherein acquiring the non-occlusion mechanical arm poses and screening them using the predetermined target pose judgment condition comprises:
acquiring all the non-occlusion mechanical arm poses, and sequentially screening them using the target pose judgment conditions in a predetermined pose optimization algorithm to obtain the non-occlusion mechanical arm poses that satisfy the target pose judgment conditions;
wherein the target pose judgment conditions comprise a first judgment condition, a second judgment condition, a third judgment condition and a fourth judgment condition.
8. The method of claim 7, wherein sequentially screening the non-occlusion mechanical arm poses using the target pose judgment conditions in the predetermined pose optimization algorithm comprises:
applying the first judgment condition to judge, from the connecting line between the signal transmitter and the signal receiver and the angle of the navigation tracker, whether the mechanical arm end tracer can be observed by the navigation tracker, and proceeding to the second judgment condition when it can be observed;
applying the second judgment condition to judge, from the angle values of the joints of the mechanical arm, whether the non-occlusion mechanical arm pose approaches a singular configuration, and proceeding to the third judgment condition when it is judged not to be singular;
applying the third judgment condition to judge whether the mechanical arm end is below the target point corresponding to the surgical position in the non-occlusion mechanical arm pose, and proceeding to the fourth judgment condition when it is not below the target point; and
applying the fourth judgment condition to judge whether the target point lies between the mechanical arm end and the mechanical arm base in the non-occlusion mechanical arm pose, and retaining the non-occlusion mechanical arm pose that satisfies all the judgment conditions when the target point is judged not to lie between the mechanical arm end and the mechanical arm base.
9. A device for determining a target pose of a mechanical arm, characterized by comprising:
an acquisition module configured to acquire the position of a target object in a robot-assisted orthopaedic surgery platform, determine an initial pose matrix according to the position of the target object, and predict, from the initial pose matrix, a target pose matrix generated after the mechanical arm moves into position;
an inverse solution module configured to perform an inverse solution operation on the target pose matrices to obtain a plurality of mechanical arm poses corresponding to each target pose matrix, wherein each mechanical arm pose is represented by the angle values of all joints in that pose;
a judgment module configured to determine the relative positional relationship between the three-dimensional models in the robot-assisted orthopaedic surgery platform according to the initial pose matrix and the mechanical arm poses, establish a linear three-dimensional model between a navigation tracker and a tracer, and judge, according to the relative positional relationship and the linear three-dimensional model, whether each mechanical arm pose is a non-occlusion mechanical arm pose; and
a screening module configured to acquire the non-occlusion mechanical arm poses, screen them using a predetermined target pose judgment condition, and take a non-occlusion mechanical arm pose that satisfies the target pose judgment condition as the target pose.
10. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 8.
CN202210513526.4A 2022-05-11 2022-05-11 Method and device for determining target gesture of mechanical arm and storage medium Pending CN117084788A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210513526.4A CN117084788A (en) 2022-05-11 2022-05-11 Method and device for determining target gesture of mechanical arm and storage medium

Publications (1)

Publication Number Publication Date
CN117084788A (en) 2023-11-21

Family

ID=88780968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210513526.4A Pending CN117084788A (en) 2022-05-11 2022-05-11 Method and device for determining target gesture of mechanical arm and storage medium

Country Status (1)

Country Link
CN (1) CN117084788A (en)

Similar Documents

Publication Publication Date Title
KR20160070006A (en) Collision avoidance method, control device, and program
CN111844130B (en) Method and device for correcting pose of robot end tool
CA2842441C (en) Creating ergonomic manikin postures and controlling computer-aided design environments using natural user interfaces
CN106249883B (en) A kind of data processing method and electronic equipment
CN107618032B (en) Method and device for controlling robot movement of robot based on second trajectory
CN112276914A (en) Industrial robot based on AR technology and man-machine interaction method thereof
CN113119104A (en) Mechanical arm control method, mechanical arm control device, computing equipment and system
CN113070877B (en) Variable attitude mapping method for seven-axis mechanical arm visual teaching
CN111113428B (en) Robot control method, robot control device and terminal equipment
CN113601510A (en) Robot movement control method, device, system and equipment based on binocular vision
CN115703227A (en) Robot control method, robot, and computer-readable storage medium
CN116019562A (en) Robot control system and method
CN117562674A (en) Surgical robot and method performed by the same
CN117084788A (en) Method and device for determining target gesture of mechanical arm and storage medium
CN114683288B (en) Robot display and control method and device and electronic equipment
CN113119131B (en) Robot control method and device, computer readable storage medium and processor
US11325247B2 (en) Robotic arm control method and apparatus and terminal device using the same
CN115869069A (en) Surgical robot control method, device, equipment, medium and system
WO2020140048A1 (en) Kinematics of wristed laparoscopic instruments
CN111870346A (en) Space registration method and device for robot and image equipment and electronic equipment
CN117562661B (en) Method for detecting collision of mechanical arm and related product
Ni et al. Research on Mobile User Interface for Robot Arm Remote Control in Industrial Application
CN114211493B (en) Remote control system and method for mechanical arm
JP7441716B2 (en) Work system and work control device
JP7324935B2 (en) ROBOT CONTROL SYSTEM, ROBOT CONTROL METHOD, AND ROBOT CONTROL PROGRAM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination