CN114041828A - Ultrasonic scanning control method, robot and storage medium - Google Patents


Info

Publication number
CN114041828A
Authority
CN
China
Prior art keywords: robot, preset, parameters, actual, parameter
Prior art date
Legal status
Granted
Application number
CN202210035120.XA
Other languages
Chinese (zh)
Other versions
CN114041828B (en)
Inventor
谈继勇
从敬德
孙熙
Current Assignee
Shenzhen Hanwei Intelligent Medical Technology Co ltd
Original Assignee
Shenzhen Hanwei Intelligent Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Hanwei Intelligent Medical Technology Co ltd filed Critical Shenzhen Hanwei Intelligent Medical Technology Co ltd
Priority to CN202210035120.XA priority Critical patent/CN114041828B/en
Publication of CN114041828A publication Critical patent/CN114041828A/en
Application granted granted Critical
Publication of CN114041828B publication Critical patent/CN114041828B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08: Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0825: Detecting organic movements or changes for diagnosis of the breast, e.g. mammography
    • A61B 8/44: Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
    • A61B 8/4444: Constructional features related to the probe
    • A61B 8/54: Control of the diagnostic device
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/00: Manipulators not otherwise provided for

Abstract

The invention discloses an ultrasonic scanning control method, a robot and a storage medium, applied to the field of robot control. The method comprises: acquiring actual running state parameters of the robot while the robot scans a target area; determining the preset running state parameters corresponding to the actual running state parameters, and determining target joint control parameters of the robot according to the actual running state parameters and the preset running state parameters; and controlling the robot to execute the scanning action corresponding to the target joint control parameters. This solves the problem that the joint control parameters of the robot cannot be accurately determined when the scanning scene changes, realizes constant-speed, constant-pressure control of the robot, and thereby improves the quality of the ultrasonic image.

Description

Ultrasonic scanning control method, robot and storage medium
Technical Field
The invention relates to the technical field of robot control, in particular to an ultrasonic scanning control method, a robot and a storage medium.
Background
During breast scanning, a constant pressure and a constant scanning speed must be maintained between the ultrasonic probe and the skin. In the related art, ultrasonic scanning control detects pressure only through a pressure sensor, calculates the joint control parameters corresponding to that pressure with a fixed algorithm, and then drives the robot with those parameters. When the scanning scene changes, however, the joint control parameters of the robot cannot be determined, so the robot cannot perform a constant-speed, constant-pressure scanning action, and the quality of the resulting ultrasonic image data is poor.
Disclosure of Invention
The embodiment of the invention provides an ultrasonic scanning control method, a robot and a storage medium, and aims to solve the problem that joint control parameters of the robot cannot be determined when a scanning scene changes.
The embodiment of the invention provides an ultrasonic scanning control method, which comprises the following steps:
acquiring actual running state parameters of the robot in the process of scanning a target area by the robot;
determining a preset operation state parameter corresponding to the actual operation state parameter, and determining a target joint control parameter of the robot according to the actual operation state parameter and the preset operation state parameter;
and controlling the robot to execute scanning actions corresponding to the target joint control parameters.
In one embodiment, the actual operating state parameters include spatial position, linear velocity, attitude, and contact normal force of the ultrasonic probe; the step of determining a preset operation state parameter corresponding to the actual operation state parameter and determining a target joint control parameter of the robot according to the actual operation state parameter and the preset operation state parameter comprises:
respectively comparing the spatial position, the linear velocity, the posture and the contact normal force of the ultrasonic probe with the operation state parameters in a preset database;
when running state parameters matching the spatial position, the linear velocity, the posture and the contact normal force of the ultrasonic probe exist in the preset database, determining those running state parameters as the preset running state parameters corresponding to the actual running state parameters;
and acquiring joint control parameters related to the preset running state parameters, and determining the joint control parameters as target joint control parameters of the robot.
In an embodiment, after the step of determining the target joint control parameter of the robot according to the actual operation state parameter and the preset operation state parameter, the method further includes:
determining a speed reward value and a pressure reward value according to the actual running state parameter and the preset running state parameter;
and correcting the target joint control parameters according to the speed reward value and the pressure reward value.
In an embodiment, the preset operation state parameters include: a preset spatial position, linear velocity, posture and contact normal force of the ultrasonic probe; and the step of determining a speed reward value and a pressure reward value according to the actual operation state parameter and the preset operation state parameter comprises the following steps:
acquiring a linear velocity weight coefficient and a contact normal force weight coefficient;
obtaining a velocity reward value according to the product of the modulus of the difference value of the linear velocity and the preset linear velocity and the linear velocity weight coefficient;
and obtaining a pressure reward value according to the product of the modulus of the difference value of the contact normal force and the preset contact normal force and the weight coefficient of the contact normal force.
In an embodiment, the step of acquiring the actual operating state parameter of the robot during the process of scanning the target area by the robot includes:
acquiring a preset running path of the robot, wherein the preset running path comprises at least one preset track sequence point, and each preset track sequence point has a corresponding preset running state parameter;
controlling the robot to scan a target area based on the preset running path;
and determining actual running state parameters corresponding to each preset track sequence point when the robot scans the target area based on the preset running path.
In an embodiment, the step of acquiring the actual operation state parameters of the robot further includes:
determining the spatial position, linear velocity and attitude of an ultrasonic probe of the robot based on a forward kinematic transformation mode;
determining a contact normal force of the robot based on a six-dimensional force sensor;
and determining the spatial position of the ultrasonic probe, the linear velocity, the posture and the contact normal force as the actual running state parameters of the robot.
In an embodiment, before the step of acquiring the actual operating state parameter of the robot in the process of scanning the target area by the robot, the method further includes:
acquiring actual reference state parameters corresponding to the virtual robot when the virtual robot scans on the virtual flexible body based on a preset reference track;
obtaining joint control parameters according to the actual reference state parameters and preset reference state parameters corresponding to the preset reference track;
and associating the joint control parameters, the actual reference state parameters and the preset reference state parameters.
In an embodiment, after the step of obtaining a joint control parameter according to the actual reference state parameter and the preset reference state parameter, and associating the joint control parameter, the actual reference state parameter, and the preset reference state parameter, the method further includes:
and correcting the joint control parameters by adopting the rigidity parameters and the damping parameters of the virtual flexible body to obtain the corrected joint control parameters.
Further, to achieve the above object, the present invention also provides a robot, comprising: a memory, a processor, and an ultrasonic scanning control program stored on the memory and executable on the processor, wherein the ultrasonic scanning control program, when executed by the processor, implements the steps of the ultrasonic scanning control method described above.
In addition, in order to achieve the above object, the present invention further provides a storage medium storing an ultrasound scanning control program, which when executed by a processor implements the steps of the ultrasound scanning control method described above.
According to the technical solution of the ultrasonic scanning control method, robot and storage medium, a scanning strategy is deployed at the user terminal; the robot scans a target area along a preset running path, and sensors installed on the robot acquire the robot's actual running state parameters in real time during scanning. The actual running state parameters are transmitted to the user terminal, where the scanning strategy processes them to determine the target joint control parameters (joint angles) of the robot. The target joint control parameters are transmitted to the robot controller, which keeps the robot moving along the preset running path while adjusting the pose of the end probe in real time to meet the required scanning speed and scanning pressure. This solves the problem that the joint control parameters of the robot cannot be determined when the scanning scene changes, and realizes constant-pressure, constant-speed control of the robot.
Drawings
FIG. 1 is a schematic diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a first embodiment of the ultrasonic scanning control method of the present invention;
FIG. 3 is a schematic flow chart of a second embodiment of the ultrasonic scanning control method of the present invention;
FIG. 4 is a schematic flow chart of a constant speed and constant pressure control strategy algorithm of the present invention;
FIG. 5 is a schematic diagram of a modeling process for parameterizing a virtual flexible body according to the present invention;
FIG. 6 is a schematic diagram illustrating the calculation and visualization of the deformation and interaction force of the virtual flexible body according to the present invention;
fig. 7 is a schematic view of a virtual robot scanning simulation scene according to the present invention.
The objects, features, and advantages of the present invention will be further explained with reference to the accompanying drawings, which illustrate embodiments of the invention rather than the invention in its entirety.
Detailed Description
For a better understanding of the above technical solutions, exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a hardware operating environment according to an embodiment of the present invention.
Fig. 1 is a schematic structural diagram of a hardware operating environment of a robot.
As shown in fig. 1, the robot may include: a processor 1001 such as a CPU, a memory 1005, a user interface 1003, a network interface 1004, and a communication bus 1002, where the communication bus 1002 is used to connect these components and enable communication between them. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory), and may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration of the robot shown in fig. 1 is not intended to be limiting and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, the memory 1005, as a storage medium, may include an operating system, a network communication module, a user interface module, and an ultrasonic scanning control program, where the operating system is a program that manages and controls the robot's hardware and software resources and supports the running of the ultrasonic scanning control program and other software.
In the robot shown in fig. 1, the user interface 1003 is mainly used for connecting a terminal and performing data communication with it; the network interface 1004 is mainly used for connecting to a background server and performing data communication with it; and the processor 1001 may be used to invoke the ultrasonic scanning control program stored in the memory 1005.
In the present embodiment, the robot includes: a memory 1005, a processor 1001 and an ultrasound scanning control program stored on the memory and executable on the processor, wherein:
when the processor 1001 calls the ultrasound scanning control program stored in the memory 1005, the following operations are performed:
acquiring actual running state parameters of the robot in the process of scanning a target area by the robot;
determining a preset operation state parameter corresponding to the actual operation state parameter, and determining a target joint control parameter of the robot according to the actual operation state parameter and the preset operation state parameter;
and controlling the robot to execute scanning actions corresponding to the target joint control parameters.
When the processor 1001 calls the ultrasound scanning control program stored in the memory 1005, the following operations are also performed:
respectively comparing the spatial position, the linear velocity, the posture and the contact normal force of the ultrasonic probe with the operation state parameters in a preset database;
when running state parameters matching the spatial position, the linear velocity, the posture and the contact normal force of the ultrasonic probe exist in the preset database, determining those running state parameters as the preset running state parameters corresponding to the actual running state parameters;
and acquiring joint control parameters related to the preset running state parameters, and determining the joint control parameters as target joint control parameters of the robot.
When the processor 1001 calls the ultrasound scanning control program stored in the memory 1005, the following operations are also performed:
determining a speed reward value and a pressure reward value according to the actual running state parameter and the preset running state parameter;
and correcting the target joint control parameters according to the speed reward value and the pressure reward value.
When the processor 1001 calls the ultrasound scanning control program stored in the memory 1005, the following operations are also performed:
acquiring a linear velocity weight coefficient and a contact normal force weight coefficient;
obtaining a velocity reward value according to the product of the modulus of the difference value of the linear velocity and the preset linear velocity and the linear velocity weight coefficient;
and obtaining a pressure reward value according to the product of the modulus of the difference value of the contact normal force and the preset contact normal force and the weight coefficient of the contact normal force.
When the processor 1001 calls the ultrasound scanning control program stored in the memory 1005, the following operations are also performed:
acquiring a preset running path of the robot, wherein the preset running path comprises at least one preset track sequence point, and each preset track sequence point has a corresponding preset running state parameter;
controlling the robot to scan a target area based on the preset running path;
and determining actual running state parameters corresponding to each preset track sequence point when the robot scans the target area based on the preset running path.
When the processor 1001 calls the ultrasound scanning control program stored in the memory 1005, the following operations are also performed:
determining the spatial position, linear velocity and attitude of an ultrasonic probe of the robot based on a forward kinematic transformation mode;
determining a contact normal force of the robot based on a six-dimensional force sensor;
and determining the spatial position of the ultrasonic probe, the linear velocity, the posture and the contact normal force as the actual running state parameters of the robot.
When the processor 1001 calls the ultrasound scanning control program stored in the memory 1005, the following operations are also performed:
acquiring actual reference state parameters corresponding to the virtual robot when the virtual robot scans on the virtual flexible body based on a preset reference track;
obtaining joint control parameters according to the actual reference state parameters and preset reference state parameters corresponding to the preset reference track;
and associating the joint control parameters, the actual reference state parameters and the preset reference state parameters.
When the processor 1001 calls the ultrasound scanning control program stored in the memory 1005, the following operations are also performed:
and correcting the joint control parameters by adopting the rigidity parameters and the damping parameters of the virtual flexible body to obtain the corrected joint control parameters.
The following will be discussed in terms of specific embodiments:
the first embodiment:
as shown in fig. 2, in a first embodiment of the present invention, an ultrasound scanning control method of the present invention includes the steps of:
step S110, acquiring actual running state parameters of the robot in the process of scanning a target area by the robot;
step S120, determining a preset operation state parameter corresponding to the actual operation state parameter, and determining a target joint control parameter of the robot according to the actual operation state parameter and the preset operation state parameter;
and S130, controlling the robot to execute scanning actions corresponding to the target joint control parameters.
This embodiment addresses the problem that the joint control parameters of the robot cannot be determined when the scanning scene changes, for which the invention provides an ultrasonic scanning control method. While the robot scans a target area, its actual running state parameters are acquired and input into a preset neural network model containing multiple control strategies, and the model determines the control strategy corresponding to the actual running state parameters. Specifically, the preset running state parameters corresponding to the actual running state parameters are determined; when the actual running state parameters change, the matched preset running state parameters change accordingly, so the method adapts to different scanning scenes. The target joint control parameters of the robot are then determined from the actual and preset running state parameters, and the robot is controlled to execute the scanning action corresponding to those target joint control parameters, achieving constant-speed, constant-pressure scanning of the target area.
Specifically, the scanning strategy is deployed at the user terminal. The robot scans the target area along the preset running path, and sensors installed on the robot acquire its actual running state parameters in real time. These parameters are transmitted to the user terminal, where the scanning strategy processes them to determine the target joint control parameters (joint angles) of the robot. The target joint control parameters are transmitted to the robot controller, which keeps the robot tracking the preset running path while adjusting the pose of the end probe in real time to meet the required scanning speed and scanning pressure.
In this embodiment, the target region is the region to be scanned, which may be a human body part such as the breast or the bladder. The actual running state parameters are the state parameters output in real time by the sensors while the robot scans the target area, and include the spatial position, attitude, linear velocity and contact normal force of the tip of the ultrasonic probe, the probe being mounted on the robot. The spatial position of the probe tip can be expressed as Pos = [x, y, z] and obtained through the robot's forward kinematics. The attitude is the spatial attitude of the probe tip, can be represented by a quaternion Quat = [w, x, y, z], and is likewise obtained through forward kinematics. The linear velocity is that of the probe tip, expressed as Vel = [v]; the modulus of the velocity vector is taken as a scalar, obtained by differencing successive forward-kinematics results. The contact normal force is the normal force of the probe tip against the breast, expressed as Force = [fn]; it is obtained from a six-dimensional force sensor mounted on the robot's wrist, the force along the sensor's Z axis being taken as the contact normal force. Once the spatial position, attitude, linear velocity and contact normal force of the probe tip are acquired, the actual running state parameters are determined.
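As a minimal sketch (not the patent's implementation), the assembly of the actual running state parameters described above can be expressed as follows, assuming a hypothetical forward-kinematics function `fk` that maps joint angles to the probe-tip position and quaternion, and a six-axis force/torque reading whose third component is the Z-axis force:

```python
import numpy as np

def actual_state(joints, prev_pos, dt, ft_reading, fk):
    """Assemble Pos, Quat, Vel and Force as described in the text.

    `fk` is a hypothetical forward-kinematics routine mapping joint
    angles to (probe-tip position [x, y, z], quaternion [w, x, y, z]).
    """
    pos, quat = fk(joints)
    # Linear speed: modulus of the finite-difference velocity vector.
    vel = float(np.linalg.norm((np.asarray(pos) - np.asarray(prev_pos)) / dt))
    fn = ft_reading[2]  # Z-axis force from the six-dimensional F/T sensor
    return {"Pos": list(pos), "Quat": list(quat), "Vel": vel, "Force": fn}
```

In a real controller the previous position and time step would come from the robot's sampling loop; here they are explicit arguments for clarity.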
Optionally, a preset running path of the robot is obtained, and the robot is controlled to scan the breast along it. The preset running path comprises at least one preset track sequence point, each with corresponding preset running state parameters, and the robot scans on the basis of these sequence points; as it does so, each preset track sequence point also has corresponding actual running state parameters, which are the parameters actually acquired by the robot's sensors. The preset running state parameters include a preset spatial position, linear velocity, posture and contact normal force of the ultrasonic probe. The preset posture and preset spatial position refer to the probe tip; they come from the scanning trajectory planned from the chest-scanning point cloud and comprise the spatial position of the probe tip and its spatial rotation quaternion. The preset linear velocity is the linear velocity of the probe tip, and the preset contact normal force is the normal force of the probe against the breast; both can be calibrated and determined from the working experience of physicians in the relevant field.
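The preset running path described above can be sketched as a sequence of trajectory points, each carrying its preset state; the field names and sample values here are illustrative assumptions, not the patent's data format:

```python
from dataclasses import dataclass

@dataclass
class TrackPoint:
    pos: tuple    # preset spatial position of the probe tip [x, y, z]
    quat: tuple   # preset attitude quaternion [w, x, y, z]
    vel: float    # preset linear speed of the probe tip (m/s)
    force: float  # preset contact normal force against the breast (N)

# A two-point path; a real path would come from the trajectory planned
# on the chest-scanning point cloud.
path = [
    TrackPoint((0.30, 0.10, 0.25), (1.0, 0.0, 0.0, 0.0), 0.02, 5.0),
    TrackPoint((0.31, 0.10, 0.25), (1.0, 0.0, 0.0, 0.0), 0.02, 5.0),
]
```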
In this embodiment, after the actual running state parameters of the robot are obtained, they are input into a preset neural network model to obtain the target joint control parameters of the robot. The preset neural network model adopts the PPO (Proximal Policy Optimization) algorithm. At each step, the PPO algorithm computes the next action value from the state observation fed back by the simulation environment, and combines the state observation, the action value and the reward produced by the previous step's action into a training sample. Specifically, within the preset neural network model, the preset running state parameters corresponding to the actual running state parameters are determined, and the target joint control parameters of the robot are determined from the actual and preset running state parameters.
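The text names PPO but does not reproduce its objective; for reference, the standard clipped surrogate loss that PPO minimizes (from the published PPO algorithm, not from this patent) looks like this in a minimal NumPy sketch:

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Standard PPO clipped surrogate loss.

    `ratio` is pi_new(a|s) / pi_old(a|s) for the sampled actions;
    `advantage` is the estimated advantage of those actions.
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Negative sign: the surrogate objective is maximized, the loss minimized.
    return float(-np.minimum(unclipped, clipped).mean())
```

Clipping the probability ratio keeps each policy update close to the previous policy, which is why PPO is a common choice for training continuous joint-control policies like the one described here.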
Optionally, the spatial position, linear velocity, posture and contact normal force of the ultrasonic probe may each be compared with the running state parameters in a preset database. When the preset database contains running state parameters matching the probe's spatial position, linear velocity, posture and contact normal force, those running state parameters are taken as the preset running state parameters corresponding to the actual running state parameters. Each preset running state parameter has an associated joint control parameter, the association having been established during the model training stage; that joint control parameter is determined as the target joint control parameter of the robot.
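A minimal lookup sketch of the matching step described above, assuming (as an illustration only) that states are encoded as flat numeric vectors and that each database entry pairs a state with the joint control parameters associated to it during training:

```python
import numpy as np

def match_preset(actual_state, database, tol=0.05):
    """Return the joint control parameters of the closest preset state.

    `database` is a list of {"state": [...], "joints": [...]} entries;
    the flat-vector encoding and tolerance are illustrative assumptions.
    Returns None when no preset state is within `tol` of the actual one.
    """
    best, best_d = None, float("inf")
    for entry in database:
        d = np.linalg.norm(np.asarray(actual_state) - np.asarray(entry["state"]))
        if d < best_d:
            best, best_d = entry, d
    return best["joints"] if best is not None and best_d <= tol else None
```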
Optionally, after the target joint control parameter of the robot is determined, a speed reward value and a pressure reward value may be determined according to the actual operation state parameter and the preset operation state parameter, and the target joint control parameter is corrected according to the speed reward value and the pressure reward value.
Optionally, to obtain a high-quality ultrasonic image, the robot's ultrasonic probe must maintain a constant scanning speed and a constant normal contact pressure against the skin. Two reward functions are constructed for these requirements, as shown in the formulas below. For the constant-speed requirement, the modulus of the deviation between the probe-tip velocity and the reference velocity is tracked in real time and multiplied by a given linear-velocity weight coefficient C_vel to obtain the speed reward value. For the constant-pressure requirement, the modulus of the deviation between the normal (skin-perpendicular) component of the probe-tip contact force and the reference pressure is obtained in real time and multiplied by a given contact-normal-force weight coefficient C_force to obtain the pressure reward value.
R_vel = |V_scan - V_ref| * C_vel
R_force = |F_scan - F_ref| * C_force
where V_scan is the linear velocity, V_ref the preset linear velocity, C_vel the linear-velocity weight coefficient, F_scan the contact normal force, F_ref the preset contact normal force, C_force the contact-normal-force weight coefficient, R_vel the speed reward value, and R_force the pressure reward value.
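The two reward terms follow directly from the formulas above. Note that in practice the weight coefficients would typically be chosen negative so that larger deviations yield lower reward; the sign convention is not fixed by the text:

```python
def velocity_reward(v_scan, v_ref, c_vel):
    """R_vel = |V_scan - V_ref| * C_vel"""
    return abs(v_scan - v_ref) * c_vel

def pressure_reward(f_scan, f_ref, c_force):
    """R_force = |F_scan - F_ref| * C_force"""
    return abs(f_scan - f_ref) * c_force
```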
In this embodiment, after the target joint control parameters of the robot are obtained, the robot is controlled to execute the scanning action corresponding to them. The target joint control parameter is a joint control angle; the actual and preset running state parameters were associated during the training stage of the preset neural network model, so once the corresponding actual and preset running state parameters are determined, the corresponding joint control angle can be determined. The joint control angle is sent to the robot so that the robot scans at that angle.
According to the above technical solution, the scanning strategy is deployed at the user terminal; the robot scans the target area along the preset running path, and sensors installed on the robot acquire its actual running state parameters in real time. These parameters are transmitted to the user terminal, where the scanning strategy processes them to determine the target joint control parameters (joint angles) of the robot. The target joint control parameters are transmitted to the robot controller, which keeps the robot moving along the preset running path while adjusting the pose of the end probe in real time to meet the required scanning speed and scanning pressure, thereby solving the problem that the joint control parameters of the robot cannot be determined when the scanning scene changes and realizing constant-pressure, constant-speed control of the robot.
Second embodiment:
referring to fig. 3, fig. 3 shows the refinement steps performed before step S110 of the first embodiment of the present invention. Before step S110, the method further includes training a preset neural network model, constructing a virtual robot, and constructing a virtual flexible body. Specifically, before step S110, the method further includes:
step S210, acquiring an actual reference state parameter corresponding to the virtual robot when the virtual robot scans on the virtual flexible body based on a preset reference track;
step S220, obtaining joint control parameters according to the actual reference state parameters and preset reference state parameters corresponding to the preset reference track;
step S230, associating the joint control parameter, the actual reference state parameter and the preset reference state parameter.
In this embodiment, constant-pressure, constant-speed robot breast ultrasound scanning comprises two parts: virtual robot training and testing, and algorithm migration application.
In the virtual robot training and testing stage, in the constructed breast scanning simulation scene, the ultrasonic probe of the robot scans along a given preset reference track, and each preset reference track sequence point has corresponding preset reference state parameters. The preset reference state parameters include: the spatial position Pos of the ultrasonic end probe, the attitude Quat of the ultrasonic probe, the linear velocity Vel of the ultrasonic probe end, and the normal Force of the contact between the ultrasonic probe and the virtual flexible body. Meanwhile, the actual reference state parameters during robot motion are acquired in real time through the simulator; the parameters included in the actual reference state parameters correspond to the preset reference state parameters. These two groups of robot states, the actual reference state parameters and the preset reference state parameters, are used as the input of a reinforcement learning neural network, and the virtual robot is trained with a reasonably formulated reward function so as to obtain a preset neural network model whose control capability meets the constant-speed and constant-pressure requirements.
In the algorithm migration application stage, the actual operation state parameters of the robot are input into the trained preset neural network model. Given any preset operation path, the preset neural network model outputs the target joint control parameters of the robot, through which the constant-speed and constant-pressure requirements during the robot's scanning motion are controlled.
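As a minimal sketch of this inference step (the function and state layouts below are assumptions for illustration, not part of the patent text), the trained network maps the concatenated actual and preset operation state parameters to target joint control parameters:

```python
import numpy as np

def control_step(policy, actual_state, preset_state):
    """Query the trained preset neural network model for target joint
    control parameters. `policy` is a placeholder for the trained network;
    each state vector is assumed to hold the probe spatial position,
    attitude quaternion, linear velocity and contact normal force."""
    obs = np.concatenate([actual_state, preset_state])  # network input
    return policy(obs)  # target joint angles sent to the robot controller
```

In use, `control_step` would be called once per control cycle with the latest sensor-derived actual state and the preset state of the current track sequence point.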
According to the technical scheme, the method comprises the steps that corresponding actual reference state parameters of the virtual robot are obtained when the virtual robot scans on the virtual flexible body based on the preset reference track; obtaining joint control parameters according to the actual reference state parameters and preset reference state parameters corresponding to the preset reference track; the joint control parameters, the actual reference state parameters and the preset reference state parameters are associated, so that the training of the control algorithm is realized.
In other embodiments, fig. 4 is a schematic flow chart of the constant-speed, constant-pressure control strategy algorithm of the present invention. The whole flow can be divided into 5 modules: simulation scene construction, flexible deformation body operation simulation, target task reward function design and optimization, reinforcement learning algorithm simulation training, and algorithm migration application. The details of each module are as follows:
firstly, establishing a simulation scene.
1) Robot model configuration: a suitable robot is selected according to the requirements of the target scene, and its virtual model and controller are configured in the simulator so that motion control of the robot can be completed in the simulator. Meanwhile, the corresponding sensor accessories, such as a six-dimensional force sensor and a two-dimensional camera, must also be configured.
2) Flexible body parametric modeling: as shown in fig. 5, in the breast scanning scene the object to be operated on is a flexible body, so modeling and simulation are performed for it. The imported model file is discretized to obtain the triangular patches of the model, the corner points of all triangular patches are connected with the centroid body of the model to construct a mass-spring-damping structure, and the flexible characteristics of the object are simulated by changing the spring and damping parameters.
And secondly, simulating the operation of the flexible deformation body.
1) Calculation of flexible body deformation and interaction force: as shown in fig. 6, with the above flexible body modeling method, the position and attitude of the flexible body are changed by changing the spatial position and attitude of its centroid body; deformation of the flexible body is realized by displacing the corner points of the triangular patches of the flexible surface, and the corresponding corner-point deformation force is calculated from the corner-point displacement and the structural parameters, thus completing the control of the flexible body. Meanwhile, the softness of the flexible body can be adjusted by changing the structural parameters corresponding to the mass-spring damping.
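A minimal sketch of the corner-point force computation, assuming a standard linear spring-damper law (the specific force law is an assumption; the patent only states that the force is derived from corner-point displacement and the structural parameters):

```python
import numpy as np

def corner_point_force(rest_pos, pos, vel, stiffness, damping):
    """Spring-damper restoring force on one triangular-patch corner point
    of the mass-spring-damping structure. `stiffness` and `damping` are
    the structural parameters that tune the flexible body's softness."""
    displacement = np.asarray(pos, dtype=float) - np.asarray(rest_pos, dtype=float)
    return -stiffness * displacement - damping * np.asarray(vel, dtype=float)
```

Summing these forces over all corner points attached to the centroid body yields the interaction force on the flexible body.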
2) Visualization of flexible body interaction: a renderer is constructed to render the skin mapped onto the flexible body surface, which is then textured and subdivided using bicubic interpolation to make the flexible body skin more realistic. The pose and stress state of the flexible body are updated, computed, and visually rendered at the same time.
Thirdly, designing and optimizing a target task reward function.
1) Constant-speed, constant-pressure scanning reward design: to obtain a high-quality ultrasonic image, the scanning speed of the robot's ultrasonic probe and the contact normal pressure against the skin must be kept constant. For these two requirements, two reward functions are constructed, as shown below. For the constant-speed requirement, the modulus of the deviation between the end speed of the robot's ultrasonic probe and the reference speed is tracked in real time and multiplied by a given weight coefficient C_vel to obtain the scanning speed reward value. For the constant-pressure requirement, the modulus of the deviation between the normal (skin-perpendicular) component of the contact force at the virtual robot's ultrasonic probe end and the reference pressure is acquired in real time and multiplied by a given weight coefficient C_force to obtain the scanning pressure reward value.
R_vel = |V_scan − V_ref| × C_vel
R_force = |F_scan − F_ref| × C_force
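Implemented directly, the two reward terms could be computed as below (a sketch following the formulas above; note that in practice the weight coefficients would typically be chosen negative so that deviations are penalised):

```python
def scan_rewards(v_scan, v_ref, c_vel, f_scan, f_ref, c_force):
    """Compute the constant-speed and constant-pressure reward terms:
    R_vel   = |V_scan - V_ref| * C_vel
    R_force = |F_scan - F_ref| * C_force
    """
    r_vel = abs(v_scan - v_ref) * c_vel
    r_force = abs(f_scan - f_ref) * c_force
    return r_vel, r_force
```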
2) Constant-speed, constant-pressure scanning reward optimization: within a certain experience range, the stiffness and damping of the flexible body are randomly adjusted in real time, training the virtual robot's adaptability to the stiffness and damping differences of different individuals so that it can better achieve the constant-pressure scanning target.
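A sketch of this randomization, assuming a multiplicative perturbation within ±20% (the range and distribution are assumptions; the patent only specifies random adjustment within an experience range):

```python
import random

def randomize_flexible_body(base_stiffness, base_damping, spread=0.2, rng=random):
    """Randomly perturb the flexible body's stiffness and damping so the
    trained policy adapts to individual differences between subjects."""
    k = base_stiffness * (1.0 + rng.uniform(-spread, spread))
    c = base_damping * (1.0 + rng.uniform(-spread, spread))
    return k, c
```

Calling this at each training episode (or time step, per module 4 below) exposes the policy to a spectrum of body-parameter variations.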
Fourthly, performing simulation training on the reinforcement learning algorithm.
1) Virtual robot simulation training: as shown in fig. 7, in the simulation environment of the constructed target scene, the virtual robot searches for an optimal scanning strategy through continuous exploration to meet the constant-speed, constant-pressure scanning target. In simulation, after the virtual robot executes each time step, the executed action is evaluated according to the environmental state transition to obtain an incentive reward R_t. The goal of simulation training is to find a group of neural network parameters such that, under a certain strategy, the sum of the incentive rewards of the actions at all time steps of the scanning task is maximal and tends to be stable, namely:
θ* = arg max_θ E[ Σ_{t=0}^{T} R_t ]
theoretically, an optimal strategy can meet all scanning target requirements, but in practice only a suboptimal solution close to the optimal strategy can be found; this suboptimal solution also largely meets the constant-speed and constant-pressure requirements and is sufficient for practical application. Essentially, what the virtual robot learns in the target simulation environment is the gain of the robot's OSC (Operational Space Controller), which the robot uses to adjust the spatial position and attitude of the probe end during scanning so as to meet the constant-speed, constant-pressure scanning requirements.
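The training objective above, the cumulative reward over one scanning episode, can be sketched as follows (the discount factor gamma is an assumption; gamma = 1 reproduces the plain sum the patent describes):

```python
def episode_return(rewards, gamma=1.0):
    """Cumulative incentive reward over one scanning episode; training
    seeks network parameters that maximise the expectation of this sum."""
    g = 0.0
    for r in reversed(rewards):  # accumulate from the last time step backward
        g = r + gamma * g
    return g
```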
2) Virtual robot simulation strategy optimization: after a suboptimal scanning strategy is preliminarily obtained, the strategy is iteratively optimized. First, the relevant hyper-parameters of the neural network are adjusted so that the network converges faster. Second, the stiffness and damping of the flexible body are dynamically adjusted at each time step to enhance the virtual robot's adaptability to environmental changes. Finally, the weight coefficients of the reward targets are adjusted to make the actual scanning process more reasonable.
And fifthly, migrating the application of the algorithm.
Migration of the breast scanning scene: in a simulation environment, the simulator cannot completely reproduce the real application scene because of the inherent defects of the physics engine, so the simulation scene inevitably differs from the actual application scene in, for example, visual rendering of the environment, dynamic model parameters of physical collision, and physical sensor noise. Therefore, the simulation scene must be migrated to the actual application (Sim-to-Real Transfer), that is, the data distribution of the simulation scene is expanded and corrected so that it is as close as possible to the real scene data distribution. In the breast scanning scene, appropriate noise is added to the six-dimensional force sensor, appropriate damping is added to each joint of the virtual robot, and training and optimization continue on the new simulation data distribution to obtain the final scanning strategy.
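A sketch of these Sim-to-Real adjustments (the gaussian noise model and the magnitudes are assumptions; the patent only calls for appropriate sensor noise and joint damping):

```python
import numpy as np

def sim_to_real_randomize(force_reading, joint_damping, noise_std=0.05,
                          extra_damping=0.01, rng=None):
    """Add noise to the six-dimensional force sensor reading and extra
    damping to each virtual robot joint, widening the simulation data
    distribution toward the real scene."""
    rng = rng if rng is not None else np.random.default_rng()
    noisy_force = np.asarray(force_reading, dtype=float) + rng.normal(0.0, noise_std, size=6)
    damped = np.asarray(joint_damping, dtype=float) + extra_damping
    return noisy_force, damped
```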
According to the technical scheme of this embodiment, modeling and interactive operation simulation of the virtual flexible deformation body are realized; a virtual robot reward function is designed and optimized for the operation task; and a constant-pressure, constant-speed robot scanning control strategy based on reinforcement learning is established.
While a logical order is shown in the flow chart, in some cases, the steps shown or described may be performed in an order different than that shown or described herein.
Based on the same inventive concept, an embodiment of the present invention further provides a storage medium, where an ultrasound scanning control program is stored. When the ultrasound scanning control program is executed by a processor, the steps of the ultrasound scanning control method described above are implemented, and the same technical effects can be achieved; these are not described herein again to avoid repetition.
Since the storage medium provided in the embodiment of the present invention is a storage medium used for implementing the method in the embodiment of the present invention, based on the method described in the embodiment, a person skilled in the art can understand the specific structure and variations of the storage medium, and thus details are not described herein again. Any storage medium used in the methods of the embodiments of the present invention is intended to be within the scope of the present invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. An ultrasonic scanning control method, characterized by comprising:
acquiring actual running state parameters of the robot in the process of scanning a target area by the robot;
determining a preset operation state parameter corresponding to the actual operation state parameter, and determining a target joint control parameter of the robot according to the actual operation state parameter and the preset operation state parameter;
and controlling the robot to execute scanning actions corresponding to the target joint control parameters.
2. The ultrasonic scanning control method of claim 1, wherein the actual operating state parameters include spatial position, linear velocity, attitude and contact normal force of the ultrasonic probe; the step of determining a preset operation state parameter corresponding to the actual operation state parameter and determining a target joint control parameter of the robot according to the actual operation state parameter and the preset operation state parameter comprises:
respectively comparing the spatial position, the linear velocity, the posture and the contact normal force of the ultrasonic probe with the operation state parameters in a preset database;
when running state parameters matched with the spatial position, the linear velocity, the posture and the contact normal force of the ultrasonic probe exist in the preset database, determining the running state parameters as preset running state parameters corresponding to the actual running state parameters;
and acquiring joint control parameters related to the preset running state parameters, and determining the joint control parameters as target joint control parameters of the robot.
3. The ultrasonic scanning control method of claim 2, wherein after the step of determining the target joint control parameters of the robot according to the actual operating state parameters and the preset operating state parameters, further comprising:
determining a speed reward value and a pressure reward value according to the actual running state parameter and the preset running state parameter;
and correcting the target joint control parameters according to the speed reward value and the pressure reward value.
4. The ultrasonic scanning control method of claim 3, wherein the preset operating state parameters comprise: the method comprises the following steps of (1) presetting a spatial position, a linear velocity, a posture and a contact normal force of an ultrasonic probe; the step of determining a speed reward value and a pressure reward value according to the actual operation state parameter and the preset operation state parameter comprises the following steps:
acquiring a linear velocity weight coefficient and a contact normal force weight coefficient;
obtaining a velocity reward value according to the product of the modulus of the difference value of the linear velocity and the preset linear velocity and the linear velocity weight coefficient;
and obtaining a pressure reward value according to the product of the modulus of the difference value of the contact normal force and the preset contact normal force and the weight coefficient of the contact normal force.
5. The ultrasonic scanning control method of claim 1, wherein the step of acquiring the actual operating state parameters of the robot during the process of scanning the target area by the robot comprises the steps of:
acquiring a preset running path of the robot, wherein the preset running path comprises at least one preset track sequence point, and each preset track sequence point has a corresponding preset running state parameter;
controlling the robot to scan a target area based on the preset running path;
and determining actual running state parameters corresponding to each preset track sequence point when the robot scans the target area based on the preset running path.
6. The ultrasonic scanning control method of claim 1, wherein the acquiring actual operating state parameters of the robot further comprises:
determining the spatial position, linear velocity and attitude of an ultrasonic probe of the robot based on a forward kinematic transformation mode;
determining a contact normal force of the robot based on a six-dimensional force sensor;
and determining the spatial position of the ultrasonic probe, the linear velocity, the posture and the contact normal force as the actual running state parameters of the robot.
7. The ultrasonic scanning control method of claim 1, wherein before the step of obtaining the actual operating state parameters of the robot during the scanning of the target area by the robot, the method further comprises:
acquiring actual reference state parameters corresponding to the virtual robot when the virtual robot scans on the virtual flexible body based on a preset reference track;
obtaining joint control parameters according to the actual reference state parameters and preset reference state parameters corresponding to the preset reference track;
and associating the joint control parameters, the actual reference state parameters and the preset reference state parameters.
8. The ultrasound scanning control method of claim 7, wherein after the step of obtaining the joint control parameter according to the actual reference state parameter and the preset reference state parameter and associating the joint control parameter, the actual reference state parameter and the preset reference state parameter, further comprising:
and correcting the joint control parameters by adopting the rigidity parameters and the damping parameters of the virtual flexible body to obtain the corrected joint control parameters.
9. A robot, characterized in that the robot comprises: memory, a processor and an ultrasound scanning control program stored on the memory and executable on the processor, which when executed by the processor implements the steps of the ultrasound scanning control method of any one of claims 1-8.
10. A storage medium storing an ultrasound scanning control program which, when executed by a processor, implements the steps of the ultrasound scanning control method of any one of claims 1-8.
CN202210035120.XA 2022-01-13 2022-01-13 Ultrasonic scanning control method, robot and storage medium Active CN114041828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210035120.XA CN114041828B (en) 2022-01-13 2022-01-13 Ultrasonic scanning control method, robot and storage medium


Publications (2)

Publication Number Publication Date
CN114041828A true CN114041828A (en) 2022-02-15
CN114041828B CN114041828B (en) 2022-04-29

Family

ID=80196437




Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS55113952A (en) * 1979-02-15 1980-09-02 Westinghouse Electric Corp Ultrasonic tester
US20070021738A1 (en) * 2005-06-06 2007-01-25 Intuitive Surgical Inc. Laparoscopic ultrasound robotic surgical system
WO2010065786A1 (en) * 2008-12-03 2010-06-10 St. Jude Medical, Atrial Fibrillation Division, Inc. System and method for determining the positioin of the tip of a medical catheter within the body of a patient
WO2012055498A1 (en) * 2010-10-26 2012-05-03 Osio Universitetssykehus Hf Method for myocardial segment work analysis
WO2016026437A1 (en) * 2014-08-19 2016-02-25 Chen Chieh Hsiao Method and system of determining probe position in surgical site
JP5920746B1 (en) * 2015-01-08 2016-05-18 学校法人早稲田大学 Puncture support system
US20180000511A1 (en) * 2015-01-08 2018-01-04 Waseda University Puncture assistance system
US20180250078A1 (en) * 2015-09-10 2018-09-06 Xact Robotics Ltd. Systems and methods for guiding the insertion of a medical tool
EP3505106A1 (en) * 2017-12-28 2019-07-03 Ethicon LLC Estimating state of ultrasonic end effector and control system therefor
US20210170585A1 (en) * 2018-01-29 2021-06-10 Samsung Electronics Co., Ltd. Robot reacting on basis of user behavior and control method therefor
WO2020154921A1 (en) * 2019-01-29 2020-08-06 昆山华大智造云影医疗科技有限公司 Ultrasound scanning control method and system, ultrasound scanning device, and storage medium
CN110488745A (en) * 2019-07-23 2019-11-22 上海交通大学 A kind of human body automatic ultrasonic scanning machine people, controller and control method
US20200046169A1 (en) * 2019-09-10 2020-02-13 Lg Electronics Inc. Robot system and control method of the same
US20210094178A1 (en) * 2019-09-27 2021-04-01 Lg Electronics Inc. Transporting robot and method for controlling the same
WO2021078066A1 (en) * 2019-10-22 2021-04-29 深圳瀚维智能医疗科技有限公司 Breast ultrasound screening method, apparatus and system
CN110993087A (en) * 2019-11-06 2020-04-10 上海交通大学 Remote ultrasonic scanning control equipment and method
CN111449680A (en) * 2020-01-14 2020-07-28 深圳大学 Optimization method of ultrasonic scanning path and ultrasonic equipment
US20210322105A1 (en) * 2020-04-21 2021-10-21 Siemens Healthcare Gmbh Control of a robotically moved object
CN112472133A (en) * 2020-12-22 2021-03-12 深圳市德力凯医疗设备股份有限公司 Posture monitoring method and device for ultrasonic probe
CN112773508A (en) * 2021-02-04 2021-05-11 清华大学 Robot operation positioning method and device
CN113288204A (en) * 2021-04-21 2021-08-24 佛山纽欣肯智能科技有限公司 Semi-autonomous B-ultrasonic detection system of robot

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114748101A (en) * 2022-06-15 2022-07-15 深圳瀚维智能医疗科技有限公司 Ultrasonic scanning control method, system and computer readable storage medium
CN114748101B (en) * 2022-06-15 2022-11-01 深圳瀚维智能医疗科技有限公司 Ultrasonic scanning control method, system and computer readable storage medium
CN116077089A (en) * 2023-02-28 2023-05-09 北京智源人工智能研究院 Multimode safety interaction method and device for ultrasonic scanning robot

Also Published As

Publication number Publication date
CN114041828B (en) 2022-04-29

Similar Documents

Publication Publication Date Title
US11701773B2 (en) Viewpoint invariant visual servoing of robot end effector using recurrent neural network
CN114041828B (en) Ultrasonic scanning control method, robot and storage medium
JP6721785B2 (en) Deep reinforcement learning for robot operation
US9449416B2 (en) Animation processing of linked object parts
CN108284436B (en) Remote mechanical double-arm system with simulation learning mechanism and method
JP5750657B2 (en) Reinforcement learning device, control device, and reinforcement learning method
JP6671694B1 (en) Machine learning device, machine learning system, data processing system, and machine learning method
US11104001B2 (en) Motion transfer of highly dimensional movements to lower dimensional robot movements
JP2008238396A (en) Apparatus and method for generating and controlling motion of robot
JP7458741B2 (en) Robot control device and its control method and program
Holden et al. Learning an inverse rig mapping for character animation
Field et al. Learning trajectories for robot programing by demonstration using a coordinated mixture of factor analyzers
RU2308762C2 (en) Method for moving a virtual object in virtual environment without mutual interference between its jointed elements
JP4267508B2 (en) Optimization of ergonomic movement of virtual dummy
JP4942924B2 (en) A method of moving a virtual articulated object in a virtual environment by continuous motion
CN110516389A (en) Learning method, device, equipment and the storage medium of behaviour control strategy
CN114310870A (en) Intelligent agent control method and device, electronic equipment and storage medium
Khalifa et al. New model-based manipulation technique for reshaping deformable linear objects
JP7246175B2 (en) Estimation device, training device, estimation method and training method
Jagersand Image based view synthesis of articulated agents
CN114028156A (en) Rehabilitation training method and device and rehabilitation robot
Naderi et al. A reinforcement learning approach to synthesizing climbing movements
Hu et al. Hybrid kinematic and dynamic simulation of running machines
Wang et al. Reinforcement Learning based End-to-End Control of Bimanual Robotic Coordination
US20240054393A1 (en) Learning Device, Learning Method, Recording Medium Storing Learning Program, Control Program, Control Device, Control Method, and Recording Medium Storing Control Program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant