CN117245648A - Robot virtual modeling and touch control method and device - Google Patents
Robot virtual modeling and touch control method and device
- Publication number
- CN117245648A (application number CN202310905781.8A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J11/00—Manipulators not otherwise provided for
Abstract
The application discloses a robot virtual modeling and haptic control method and device, wherein the method comprises the following steps: establishing a virtual robot in a virtual environment according to physical entity data of the robot, and establishing a connection between a force feedback controller and the virtual robot; an operator operates the force feedback controller and sends the end coordinate data of the force feedback controller to the virtual robot; the virtual robot calculates pose data according to the end coordinate data, performs a pose transformation according to the pose data, acquires target interaction data from interaction with the virtual environment during the pose transformation, and sends the target interaction data to the force feedback controller; and the force feedback controller executes a force feedback action according to the target interaction data. The method and device provide bidirectional immersive haptic interaction between the operator and the virtual robot, improve the realism of interaction with the virtual robot, and can be applied to robot control in virtual reality and augmented reality.
Description
Technical Field
The application relates to the technical field of robot simulation and control, and in particular to a robot virtual modeling and haptic control method and device.
Background
With the continuous development of the manufacturing and logistics industries, robots are widely used in assembly-line production, assembly, logistics and transportation, and have become the core of intelligent factory construction. Robots can complete many highly repetitive, high-risk and high-precision tasks, so high-precision control of robots is particularly important.
Robot virtual modeling is a technology for simulating robot motion and control: the actions and effects of a robot are simulated by computer and applied to robot design, layout and control. In particular, real-time virtual modeling of a robot can simulate, optimize and correct the control of the physical robot, which makes it key to supporting human-machine interaction and cooperative control.
When interacting with a traditional robot virtual model, only one-way visual and auditory operations can be performed by means of a head-mounted display and a handle. This leads to differences between the virtual model and the actual robot; that is, traditional robot virtual modeling and interaction have low realism and operability and can hardly meet the requirements of industrial applications.
Disclosure of Invention
In view of the above, the present application provides a robot virtual modeling and haptic control method and apparatus, which help to improve the realism of interaction between an operator and a virtual robot.
According to one aspect of the present application, there is provided a robot virtual modeling and haptic control method, the method comprising:
establishing a virtual robot in a virtual environment according to physical entity data of the robot, and establishing a connection between a force feedback controller and the virtual robot;
an operator operates the force feedback controller and transmits end coordinate data of the force feedback controller to the virtual robot;
the virtual robot calculates pose data according to the end coordinate data, performs a pose transformation according to the pose data, acquires target interaction data from interaction with the virtual environment during the pose transformation, and sends the target interaction data to the force feedback controller;
and the force feedback controller executes a force feedback action according to the target interaction data.
Optionally, the physical entity data includes model data, mechanical dimension data, joint number data and motion range data, and establishing a virtual robot in a virtual environment according to the physical entity data of the robot includes:
establishing a virtual model of each component of the virtual robot according to the model data, mechanical dimension data, joint number data and motion range data of the robot, combining the virtual models of the components according to the actual connection relationships of the robot to form a virtual three-dimensional model of the virtual robot, and binding the parent-child relationships of the joint nodes in the virtual three-dimensional model;
and establishing the virtual robot according to the physical attributes of the robot and the bound virtual three-dimensional model, wherein the physical attributes include mass, model size and surface material, and the virtual models of the components include a base model, a link model, a sensor model and a front-end tool model.
Optionally, binding the parent-child relationships of the joint nodes in the virtual three-dimensional model includes:
setting the base model as the parent node, and setting the first link model as a child node of the base model;
setting each link model in turn as a child node of the previous link model, according to the preset order of the actual connection relationships of the robot;
and setting the sensor model and the front-end tool model as child nodes of the last link model.
Optionally, establishing the virtual robot according to the physical attributes of the robot and the bound virtual three-dimensional model includes:
building a rigid body component, a collider component and a surface material component for the bound virtual three-dimensional model according to the mass, model size and surface material of the robot, so as to establish the virtual robot;
wherein the rigid body component is determined according to the mass of the robot, the collider component is determined according to the model size of the robot, and the surface material component is determined according to the surface material properties of the robot.
Optionally, the virtual robot calculating pose data according to the end coordinate data includes:
establishing a Cartesian coordinate space of the virtual robot, and establishing an independent coordinate system for each joint of the virtual robot in the Cartesian coordinate space according to the end coordinate data;
determining the transformation matrices between adjacent joints of the virtual robot according to the independent coordinate systems of the joints;
and calculating the pose information of each joint of the virtual robot in sequence according to the transformation matrices of the adjacent joints, so as to obtain the pose data.
Optionally, acquiring the target interaction data from interaction with the virtual environment during the pose transformation and sending the target interaction data to the force feedback controller includes:
detecting whether the virtual robot contacts the virtual environment during the pose transformation; when contact occurs, acquiring collision contact position data and collision virtual attribute data of the contacted virtual environment, taking the collision contact position data and the collision virtual attribute data as collision target interaction data, and sending the collision target interaction data to the force feedback controller, wherein the virtual environment includes virtual objects;
if the operator presses a grab button while the virtual robot is in contact with a virtual object, controlling the virtual robot to execute a grasping action, acquiring grasp contact position data and grasp virtual attribute data, taking the grasp contact position data and the grasp virtual attribute data as grasp target interaction data, and sending the grasp target interaction data to the force feedback controller; and if the operator releases the grab button, controlling the virtual robot to execute a release action, wherein the target interaction data include the collision target interaction data and the grasp target interaction data.
Optionally, the virtual attribute data includes rigid body component parameters, collider component parameters and surface material component parameters;
wherein the rigid body component parameters are read from the rigid body component of the contacted virtual object and are determined by the mass of the contacted virtual object; the collider component parameters are read from the collider component of the contacted virtual object and are determined by the model contour of the contacted virtual object; and the surface material component parameters are read from the surface material component of the contacted virtual object and are determined by the hardness data, viscosity data and roughness data of the surface of the contacted virtual object.
Optionally, the force feedback action includes a gravity feedback action, an elastic force feedback action, a viscous force feedback action and a friction force feedback action, and the force feedback controller executing a force feedback action according to the target interaction data includes:
the force feedback controller executes the gravity feedback action according to the contact position data and the mass of the contacted virtual object;
the force feedback controller executes the elastic force feedback action according to the contact position data and the hardness data of the contacted virtual object;
the force feedback controller executes the viscous force feedback action according to the contact position data and the viscosity data of the contacted virtual object;
and the force feedback controller executes the friction force feedback action according to the contact position data and the roughness data of the contacted virtual object.
According to another aspect of the present application, there is provided a robot virtual modeling and haptic control device, the device comprising:
the building module is used for establishing a virtual robot in a virtual environment according to physical entity data of the robot and establishing a connection between a force feedback controller and the virtual robot;
the operation module is used for an operator to operate the force feedback controller and send the end coordinate data of the force feedback controller to the virtual robot;
the interaction module is used for the virtual robot to calculate pose data according to the end coordinate data, perform a pose transformation according to the pose data, acquire target interaction data from interaction with the virtual environment during the pose transformation, and send the target interaction data to the force feedback controller;
and the feedback module is used for the force feedback controller to execute a force feedback action according to the target interaction data.
Optionally, the building module is further configured to:
establishing a virtual model of each component of the virtual robot according to the model data, mechanical dimension data, joint number data and motion range data of the robot, combining the virtual models of the components according to the actual connection relationships of the robot to form a virtual three-dimensional model of the virtual robot, and binding the parent-child relationships of the joint nodes in the virtual three-dimensional model;
and establishing the virtual robot according to the physical attributes of the robot and the bound virtual three-dimensional model, wherein the physical attributes include mass, model size and surface material, and the virtual models of the components include a base model, a link model, a sensor model and a front-end tool model.
Optionally, the building module is further configured to:
setting the base model as the parent node, and setting the first link model as a child node of the base model;
setting each link model in turn as a child node of the previous link model, according to the preset order of the actual connection relationships of the robot;
and setting the sensor model and the front-end tool model as child nodes of the last link model.
Optionally, the building module is further configured to:
building a rigid body component, a collider component and a surface material component for the bound virtual three-dimensional model according to the mass, model size and surface material of the robot, so as to establish the virtual robot;
wherein the rigid body component is determined according to the mass of the robot, the collider component is determined according to the model size of the robot, and the surface material component is determined according to the surface material properties of the robot.
Optionally, the interaction module is further configured to:
establishing a Cartesian coordinate space of the virtual robot, and establishing an independent coordinate system for each joint of the virtual robot in the Cartesian coordinate space according to the end coordinate data;
determining the transformation matrices between adjacent joints of the virtual robot according to the independent coordinate systems of the joints;
and calculating the pose information of each joint of the virtual robot in sequence according to the transformation matrices of the adjacent joints, so as to obtain the pose data.
Optionally, the interaction module is further configured to:
detecting whether the virtual robot contacts the virtual environment during the pose transformation; when contact occurs, acquiring collision contact position data and collision virtual attribute data of the contacted virtual environment, taking the collision contact position data and the collision virtual attribute data as collision target interaction data, and sending the collision target interaction data to the force feedback controller, wherein the virtual environment includes virtual objects;
if the operator presses a grab button while the virtual robot is in contact with a virtual object, controlling the virtual robot to execute a grasping action, acquiring grasp contact position data and grasp virtual attribute data, taking the grasp contact position data and the grasp virtual attribute data as grasp target interaction data, and sending the grasp target interaction data to the force feedback controller; and if the operator releases the grab button, controlling the virtual robot to execute a release action, wherein the target interaction data include the collision target interaction data and the grasp target interaction data.
Optionally, the virtual attribute data includes rigid body component parameters, collider component parameters and surface material component parameters;
wherein the rigid body component parameters are read from the rigid body component of the contacted virtual object and are determined by the mass of the contacted virtual object; the collider component parameters are read from the collider component of the contacted virtual object and are determined by the model contour of the contacted virtual object; and the surface material component parameters are read from the surface material component of the contacted virtual object and are determined by the hardness data, viscosity data and roughness data of the surface of the contacted virtual object.
Optionally, the feedback module is further configured to:
the force feedback controller executes the gravity feedback action according to the contact position data and the mass of the contacted virtual object;
the force feedback controller executes the elastic force feedback action according to the contact position data and the hardness data of the contacted virtual object;
the force feedback controller executes the viscous force feedback action according to the contact position data and the viscosity data of the contacted virtual object;
and the force feedback controller executes the friction force feedback action according to the contact position data and the roughness data of the contacted virtual object.
By means of the above technical scheme, the robot virtual modeling and haptic control method and device accurately establish a virtual robot from the physical entity data of the robot. An operator uses the force feedback controller to drive the virtual robot through a pose transformation according to the end coordinate data, the target interaction data from interaction with the virtual environment during the pose transformation are acquired, and the force feedback controller executes a force feedback action according to the target interaction data, giving the operator force-tactile perception. This achieves bidirectional immersive haptic interaction between the operator and the virtual robot, improves the realism of interaction with the virtual robot, and can be applied to robot control in virtual reality and augmented reality.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the present application clearer and implementable according to the content of the specification, and to make the above and other objects, features and advantages of the present application easier to understand, the detailed description of the present application is given below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application without unduly limiting it. In the drawings:
Fig. 1 shows a schematic flow chart of a robot virtual modeling and haptic control method according to an embodiment of the present application;
Fig. 2 shows a schematic flow chart of another robot virtual modeling and haptic control method according to an embodiment of the present application;
Fig. 3 shows a schematic structural diagram of a robot virtual modeling and haptic control device according to an embodiment of the present application.
Detailed Description
The present application will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other.
In the prior art, the virtual robot is mainly a static presentation of the physical robot and cannot simulate the dynamic behaviors and attributes of the physical robot in the real world, such as collisions and mass. This causes differences between the virtual robot and the physical robot, so the virtual robot cannot accurately reproduce the physical robot's behavior in the real world. Moreover, when interacting with a traditional virtual robot, only one-way visual and auditory operations can be performed by means of a head-mounted display and a handle, and bidirectional immersive haptic interaction is difficult to achieve. That is, the creation of and interaction with traditional virtual robots have low realism and operability and can hardly meet the requirements of industrial applications.
In this embodiment, a robot virtual modeling and haptic control method is provided, as shown in fig. 1, which includes:
and step 101, establishing a virtual robot in a virtual environment according to physical entity data of the robot, and establishing connection between a force feedback controller and the virtual robot.
The embodiment of the application can be applied to the interaction process with the virtual robot, so that the haptic interaction with the virtual robot is realized, and the embodiment of the application can also be applied to robot control under virtual reality and augmented reality. First, a virtual robot is established in a virtual environment according to physical entity data of the robot, a connection between a force feedback controller and the virtual robot is established, for example, the virtual robot may be established in virtual reality glasses, a connection between the force feedback controller and the virtual robot may be established through a wireless network, an interface, etc., the virtual robot may be accurately established through physical entity data of the robot, and a connection between the force feedback controller and the virtual robot may be established in preparation for realizing haptic interaction next.
Step 102, an operator operates the force feedback controller and sends end coordinate data of the force feedback controller to the virtual robot.
An operator then operates the force feedback controller and sends the end coordinate data of the force feedback controller to the virtual robot, realizing one-way haptic interaction from the operator to the virtual robot; the virtual robot subsequently performs data interaction based on the end coordinate data. A minimal sketch of such a link is given below.
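As an illustrative sketch only: the connection of step 101 and the coordinate transmission of step 102 could be carried over a plain socket link between the controller process and the virtual environment. The host, port and JSON message format below are assumptions for illustration, not details specified by this application.

```python
import json
import socket

# Hypothetical address of the virtual environment hosting the virtual robot.
VIRTUAL_ROBOT_ADDR = ("127.0.0.1", 9090)

def connect_to_virtual_robot() -> socket.socket:
    """Establish the connection between the force feedback controller and the virtual robot."""
    return socket.create_connection(VIRTUAL_ROBOT_ADDR, timeout=5.0)

def send_end_coordinates(sock: socket.socket, x: float, y: float, z: float) -> None:
    """Send the controller's end coordinate data to the virtual robot as one JSON line."""
    message = json.dumps({"type": "end_coords", "x": x, "y": y, "z": z}) + "\n"
    sock.sendall(message.encode("utf-8"))
```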
Step 103, the virtual robot calculates pose data according to the end coordinate data, the virtual robot is controlled to perform a pose transformation according to the pose data, target interaction data from interaction with the virtual environment during the pose transformation are acquired, and the target interaction data are sent to the force feedback controller.
To make the virtual robot reproduce the pose corresponding to the end coordinate data, pose data are calculated from the end coordinate data, and the virtual robot performs a pose transformation according to them. The target interaction data generated by the virtual robot's interaction with the virtual environment during the pose transformation are then acquired, completing the control and pose change of the virtual robot, and are sent to the force feedback controller, which facilitates the subsequent haptic interaction.
Step 104, the force feedback controller executes a force feedback action according to the target interaction data.
The force feedback controller then executes a force feedback action according to the target interaction data, completing the bidirectional haptic interaction. The force feedback controller can provide force feedback information in multiple degrees of freedom and give the operator force-tactile perception.
By applying the technical scheme of this embodiment, the virtual robot is accurately built from the physical entity data of the robot; the operator uses the force feedback controller to drive the virtual robot through a pose transformation according to the end coordinate data; the target interaction data from interaction with the virtual environment during the pose transformation are acquired; and the force feedback controller executes a force feedback action according to the target interaction data, providing force-tactile perception for the operator. This achieves bidirectional immersive haptic interaction between the operator and the virtual robot and improves the realism of interaction with the virtual robot.
Further, as a refinement and extension of the foregoing embodiment, and to fully explain its implementation process, another robot virtual modeling and haptic control method is provided, in which the physical attributes include mass, model size and surface material, and the target interaction data include collision target interaction data and grasp target interaction data. As shown in fig. 2, the method includes:
Step 201, establishing virtual models of the components of the virtual robot according to the model data, mechanical dimension data, joint number data and motion range data of the robot, combining the virtual models of the components according to the actual connection relationships of the robot to form a virtual three-dimensional model of the virtual robot, and binding the parent-child relationships of the joint nodes in the virtual three-dimensional model.
In the embodiment of the application, virtual models of the components of the virtual robot are first established according to the model data, mechanical dimension data, joint number data and motion range data of the robot; the component virtual models include a base model, link models, a sensor model, a front-end tool model and the like. The component virtual models are then combined according to the actual connection relationships of the robot to form a virtual three-dimensional model of the virtual robot, and the parent-child relationships of the joint nodes are bound, as sketched below.
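A minimal sketch of this joint-node binding, assuming a simple scene-graph node type; all names here are illustrative, not identifiers from the application:

```python
from dataclasses import dataclass, field

@dataclass
class ModelNode:
    """One component model of the virtual robot (base, link, sensor, or front-end tool)."""
    name: str
    parent: "ModelNode | None" = None
    children: list = field(default_factory=list)

    def attach(self, child: "ModelNode") -> "ModelNode":
        """Bind `child` under this node and return it, so calls can be chained."""
        child.parent = self
        self.children.append(child)
        return child

def build_robot_tree(num_links: int) -> ModelNode:
    """Base is the parent node; each link is a child of the previous one;
    the sensor and front-end tool hang off the last link."""
    base = ModelNode("base")
    node = base
    for i in range(1, num_links + 1):
        node = node.attach(ModelNode(f"link_{i}"))
    node.attach(ModelNode("sensor"))
    node.attach(ModelNode("front_end_tool"))
    return base
```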
Step 202, building a rigid body component, a collider component and a surface material component for the bound virtual three-dimensional model according to the mass, model size and surface material of the robot, so as to establish the virtual robot; and establishing a connection between a force feedback controller and the virtual robot.
Next, the rigid body component is determined according to the mass of the robot, the collider component is determined according to the model size of the robot, and the surface material component is determined according to the surface material of the robot: a rigid body component is built for the virtual three-dimensional model from the robot's mass, a collider component from the robot's model size, and a surface material component from the robot's surface material. The virtual robot is thus established, and the connection between the force feedback controller and the virtual robot is set up. Adding the rigid body, collider and surface material components endows the virtual robot with haptic interaction attributes and improves the realism of interaction.
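Continuing the sketch, the three haptic-interaction components can be modeled as plain data attached to a node; in a game engine these would correspond to its rigid body, collider and physics-material components. The class and field names below are assumptions:

```python
from dataclasses import dataclass

@dataclass
class RigidBodyComponent:
    mass_kg: float        # determined by the robot's mass

@dataclass
class ColliderComponent:
    size_xyz: tuple       # determined by the model size (a bounding box here)

@dataclass
class SurfaceMaterialComponent:
    hardness: float       # surface material properties, used later for the
    viscosity: float      # elastic, viscous and friction force feedback
    roughness: float

def add_physics_components(node, mass_kg, size_xyz, hardness, viscosity, roughness):
    """Endow a bound model node with the haptic interaction attributes."""
    node.rigid_body = RigidBodyComponent(mass_kg)
    node.collider = ColliderComponent(size_xyz)
    node.surface_material = SurfaceMaterialComponent(hardness, viscosity, roughness)
```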
Step 203, an operator operates the force feedback controller and transmits the end coordinate data of the force feedback controller to the virtual robot.
Step 204, establishing a Cartesian coordinate space of the virtual robot, and establishing an independent coordinate system for each joint of the virtual robot in the Cartesian coordinate space according to the end coordinate data; determining the transformation matrices between adjacent joints of the virtual robot according to the independent coordinate systems of the joints, and calculating the pose information of each joint of the virtual robot in sequence according to the transformation matrices of the adjacent joints to obtain the pose data.
Then, a Cartesian coordinate space of the virtual robot is established according to a preset creation rule, and an independent coordinate system of each joint of the virtual robot is established in the Cartesian coordinate space according to the end coordinate data, each joint coordinate system being established with the right-hand rule. From the independent joint coordinate systems, the transformation matrix between adjacent joints of the virtual robot is determined as (reconstructed here in the standard modified Denavit-Hartenberg form consistent with the parameters defined below):

$$ {}^{i-1}_{i}T = \begin{bmatrix} \cos\theta_i & -\sin\theta_i & 0 & l_{i-1} \\ \sin\theta_i\cos\alpha_{i-1} & \cos\theta_i\cos\alpha_{i-1} & -\sin\alpha_{i-1} & -d_i\sin\alpha_{i-1} \\ \sin\theta_i\sin\alpha_{i-1} & \cos\theta_i\sin\alpha_{i-1} & \cos\alpha_{i-1} & d_i\cos\alpha_{i-1} \\ 0 & 0 & 0 & 1 \end{bmatrix} $$

where $i$ denotes the $i$-th joint of the virtual robot, ${}^{i-1}_{i}T$ is the transformation matrix between the $(i-1)$-th and $i$-th joints, $\theta_i$ is the joint angle variable of the $i$-th joint, $l_{i-1}$ is the distance along the X axis between the $(i-1)$-th and $i$-th joints, $d_i$ is the translation along the Z axis between the $(i-1)$-th and $i$-th joints, and $\alpha_{i-1}$ is the rotation angle about the X axis between the $(i-1)$-th and $i$-th joints.

Based on the transformation matrices of the adjacent joints, the pose information of each joint of the virtual robot is calculated in sequence. For example, the pose of the robot end relative to joint 1 can be calculated as

$$ T_1^{End} = T_1^{2}\,T_2^{3}\cdots T_n^{End}, $$

where $T_i^{i+1}$ is the transformation matrix between the two adjacent joints $i$ and $i+1$, and $T_1^{End}$ is the transformation matrix between joint 1 and the end position. The pose data through which the virtual robot must transform are thus determined by computing the adjacent-joint transformation matrices, in preparation for completing the haptic interaction.
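The adjacent-joint transform and the chained end pose translate directly into code. A numpy sketch consistent with the matrix above; the link parameters passed in would come from the robot's mechanical dimension data, and the function names are illustrative:

```python
import numpy as np

def dh_transform(theta_i: float, l_prev: float, d_i: float, alpha_prev: float) -> np.ndarray:
    """Transformation matrix between joint i-1 and joint i (matches the matrix above)."""
    ct, st = np.cos(theta_i), np.sin(theta_i)
    ca, sa = np.cos(alpha_prev), np.sin(alpha_prev)
    return np.array([
        [ct,      -st,      0.0,  l_prev],
        [st * ca,  ct * ca, -sa, -d_i * sa],
        [st * sa,  ct * sa,  ca,  d_i * ca],
        [0.0,      0.0,     0.0,  1.0],
    ])

def end_pose(joint_angles, dh_params) -> np.ndarray:
    """Chain the adjacent-joint transforms to obtain T_1^End.

    dh_params is a list of (l_prev, d_i, alpha_prev) tuples, one per joint.
    """
    T = np.eye(4)
    for theta, (l_prev, d, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, l_prev, d, alpha)
    return T
```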
Step 205, controlling the virtual robot to perform a pose transformation according to the pose data, detecting whether the virtual robot contacts the virtual environment during the pose transformation, acquiring collision contact position data and collision virtual attribute data of the contacted virtual environment when contact occurs, taking the collision contact position data and the collision virtual attribute data as collision target interaction data, and sending the collision target interaction data to the force feedback controller.
Next, with the target interaction data including collision target interaction data and grasp target interaction data, the virtual robot is controlled to perform the pose transformation according to the pose data, and whether the virtual robot contacts the virtual environment during the pose transformation is detected. The virtual environment includes virtual objects, and the virtual robot may contact either the virtual environment or a virtual object in it during the pose transformation. When contact occurs, the collision contact position data and the collision virtual attribute data of the contacted virtual environment or virtual object are acquired, taken as the collision target interaction data, and sent to the force feedback controller. Acquiring the target interaction data during the pose transformation and sending them to the force feedback controller realizes data interaction with the virtual robot and facilitates the haptic interaction that follows; a sketch is given below.
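A sketch of packaging the collision target interaction data when a contact is detected; `touched` is assumed to be a node carrying the three components added in step 202:

```python
def on_collision(contact_position, touched) -> dict:
    """Read the contacted object's components and build collision target interaction data."""
    return {
        "type": "collision",
        "contact_position": contact_position,             # collision contact position data
        "mass": touched.rigid_body.mass_kg,               # rigid body component parameter
        "contour": touched.collider.size_xyz,             # collider component parameter
        "hardness": touched.surface_material.hardness,    # surface material component
        "viscosity": touched.surface_material.viscosity,  #   parameters (hardness,
        "roughness": touched.surface_material.roughness,  #   viscosity, roughness)
    }

# The resulting dict would be serialized and sent to the force feedback
# controller, e.g. over the socket link sketched earlier.
```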
Step 206, if the operator presses a grab button while the virtual robot is in contact with the virtual environment, controlling the virtual robot to execute a grasping action, acquiring grasp contact position data and grasp virtual attribute data, taking the grasp contact position data and the grasp virtual attribute data as grasp target interaction data, and sending the grasp target interaction data to the force feedback controller; and if the operator releases the grab button, controlling the virtual robot to execute a release action.
When the operator releases the button, a release function is triggered and the virtual robot is controlled to execute the release action. When the grasp virtual attribute data are acquired, a fixed joint component may be added at the contact position; the fixed joint component connects the virtual robot with the virtual object and simulates the grasping effect. Correspondingly, when the release function is triggered, the fixed joint component and the grasp virtual attribute data are deleted, ending the grasping action. Through the operator's actions, the virtual robot is thus controlled to execute the actions corresponding to the pose transformation, and the target interaction data produced during the transformation are acquired and transmitted to the force feedback controller, as sketched below.
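A sketch of the grab-button handling, with the fixed joint represented as a simple record coupling the robot end to the object; the names are illustrative assumptions:

```python
class GraspHandler:
    """Grasp on button press, release on button release (step 206)."""

    def __init__(self, robot_end_node):
        self.end = robot_end_node
        self.fixed_joint = None  # created on grasp, deleted on release

    def on_grab_pressed(self, virtual_object, contact_position) -> dict:
        # A fixed joint added at the contact position couples the virtual object
        # to the robot end, simulating the grasping effect.
        self.fixed_joint = {"a": self.end, "b": virtual_object, "anchor": contact_position}
        return {  # grasp target interaction data, sent to the force feedback controller
            "type": "grasp",
            "contact_position": contact_position,
            "mass": virtual_object.rigid_body.mass_kg,
        }

    def on_grab_released(self) -> None:
        # Deleting the fixed joint ends the grasping action (release action).
        self.fixed_joint = None
```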
Step 207, the force feedback controller executes a force feedback action according to the target interaction data.
The force feedback controller then executes a force feedback action according to the target interaction data, giving the operator force-tactile perception and realizing haptic interaction with the virtual robot.
Optionally, in step 201, binding the parent-child relationships of the joint nodes in the virtual three-dimensional model includes: setting the base model as the parent node, and setting the first link model as a child node of the base model; setting each link model in turn as a child node of the previous link model, according to the preset order of the actual connection relationships of the robot; and setting the sensor model and the front-end tool model as child nodes of the last link model.
In the above embodiment of the application, there are multiple link models, distinguished by the preset order. The base model is set as the parent node, the first link model is set as a child node of the base model, each subsequent link model is set in turn as a child node of the previous link model according to the preset order of the robot's actual connection relationships, and the sensor model and the front-end tool model are set as child nodes of the last link model. This completes the parent-child binding of the joint nodes and facilitates control of the virtual robot.
Optionally, the force feedback action includes a gravity feedback action, an elastic force feedback action, a viscous force feedback action and a friction force feedback action; in step 207, the force feedback controller executing a force feedback action according to the target interaction data includes: the force feedback controller executes the gravity feedback action according to the contact position data and the mass of the contacted virtual object; the force feedback controller executes the elastic force feedback action according to the contact position data and the hardness data of the contacted virtual object; the force feedback controller executes the viscous force feedback action according to the contact position data and the viscosity data of the contacted virtual object; and the force feedback controller executes the friction force feedback action according to the contact position data and the roughness data of the contacted virtual object.
In the above embodiment of the application, the force feedback controller executes the gravity feedback action according to the contact position data and the mass of the contacted virtual object, for example by increasing a downward torque to simulate the gravity felt when lifting the object. It executes the elastic force feedback action according to the contact position data and the hardness data of the contacted virtual object, for example by increasing a torque along the contact direction to simulate the elastic force of the object surface. It executes the viscous force feedback action according to the contact position data and the viscosity data of the contacted virtual object, for example by increasing a torque opposite to the contact direction to simulate the viscous force of the object surface. And it executes the friction force feedback action according to the contact position data and the roughness data of the contacted virtual object, for example by increasing a torque perpendicular to the contact direction to simulate the friction of the object surface. The operator thus perceives the tactile effect through the force feedback controller, which improves the realism of interaction with the robot. A sketch of composing these four actions is given below.
The virtual attribute data include rigid body component parameters, collider component parameters and surface material component parameters. The rigid body component parameters are read from the rigid body component of the contacted virtual object and are determined by its mass; the collider component parameters are read from the collider component of the contacted virtual object and are determined by its model contour; and the surface material component parameters are read from the surface material component of the contacted virtual object and are determined by the hardness data, viscosity data and roughness data of its surface.
By applying the technical scheme of this embodiment, the virtual robot is established from the physical robot, the haptic interaction attributes of the virtual robot are endowed by adding the rigid body, collider and surface material components, and the target interaction data in the virtual environment are acquired while the pose of the virtual robot is controlled with real-time haptics, so that force-tactile feedback is provided to the operator. This provides bidirectional immersive haptic interaction between the operator and the virtual robot and improves the realism of the interaction.
Further, as a specific implementation of the method of fig. 1, an embodiment of the present application provides a robot virtual modeling and haptic control device, as shown in fig. 3, including:
the building module is used for establishing a virtual robot in a virtual environment according to physical entity data of the robot and establishing a connection between a force feedback controller and the virtual robot;
the operation module is used for an operator to operate the force feedback controller and send the end coordinate data of the force feedback controller to the virtual robot;
the interaction module is used for the virtual robot to calculate pose data according to the end coordinate data, perform a pose transformation according to the pose data, acquire target interaction data from interaction with the virtual environment during the pose transformation, and send the target interaction data to the force feedback controller;
and the feedback module is used for the force feedback controller to execute a force feedback action according to the target interaction data.
Optionally, the building module is further configured to:
establishing a virtual model of each component of the virtual robot according to the model data, mechanical dimension data, joint number data and motion range data of the robot, combining the virtual models of the components according to the actual connection relationships of the robot to form a virtual three-dimensional model of the virtual robot, and binding the parent-child relationships of the joint nodes in the virtual three-dimensional model;
and establishing the virtual robot according to the physical attributes of the robot and the bound virtual three-dimensional model, wherein the physical attributes include mass, model size and surface material, and the virtual models of the components include a base model, a link model, a sensor model and a front-end tool model.
Optionally, the building module is further configured to:
setting the base model as the parent node, and setting the first link model as a child node of the base model;
setting each link model in turn as a child node of the previous link model, according to the preset order of the actual connection relationships of the robot;
and setting the sensor model and the front-end tool model as child nodes of the last link model.
Optionally, the building module is further configured to:
building a rigid body component, a collider component and a surface material component for the bound virtual three-dimensional model according to the mass, model size and surface material of the robot, so as to establish the virtual robot;
wherein the rigid body component is determined according to the mass of the robot, the collider component is determined according to the model size of the robot, and the surface material component is determined according to the surface material properties of the robot.
Optionally, the interaction module is further configured to:
establishing a Cartesian coordinate space of the virtual robot, and establishing an independent coordinate system for each joint of the virtual robot in the Cartesian coordinate space according to the end coordinate data;
determining the transformation matrices between adjacent joints of the virtual robot according to the independent coordinate systems of the joints;
and calculating the pose information of each joint of the virtual robot in sequence according to the transformation matrices of the adjacent joints, so as to obtain the pose data.
Optionally, the interaction module is further configured to:
detecting whether the virtual robot contacts the virtual environment during the pose transformation; when contact occurs, acquiring collision contact position data and collision virtual attribute data of the contacted virtual environment, taking the collision contact position data and the collision virtual attribute data as collision target interaction data, and sending the collision target interaction data to the force feedback controller, wherein the virtual environment includes virtual objects;
if the operator presses a grab button while the virtual robot is in contact with a virtual object, controlling the virtual robot to execute a grasping action, acquiring grasp contact position data and grasp virtual attribute data, taking the grasp contact position data and the grasp virtual attribute data as grasp target interaction data, and sending the grasp target interaction data to the force feedback controller; and if the operator releases the grab button, controlling the virtual robot to execute a release action.
Optionally, the virtual attribute data includes rigid body component parameters, collider component parameters and surface material component parameters;
wherein the rigid body component parameters are read from the rigid body component of the contacted virtual object and are determined by the mass of the contacted virtual object; the collider component parameters are read from the collider component of the contacted virtual object and are determined by the model contour of the contacted virtual object; and the surface material component parameters are read from the surface material component of the contacted virtual object and are determined by the hardness data, viscosity data and roughness data of the surface of the contacted virtual object.
Optionally, the feedback module is further configured to:
the force feedback controller executes the gravity feedback action according to the contact position data and the mass of the contacted virtual object;
the force feedback controller executes the elastic force feedback action according to the contact position data and the hardness data of the contacted virtual object;
the force feedback controller executes the viscous force feedback action according to the contact position data and the viscosity data of the contacted virtual object;
and the force feedback controller executes the friction force feedback action according to the contact position data and the roughness data of the contacted virtual object.
It should be noted that, for the robot virtual modeling and haptic control device according to the embodiment of the present application, reference may be made to the corresponding descriptions of the functional units in the methods of figs. 1 to 2, which are not repeated here.
Through the description of the above embodiments, those skilled in the art can clearly understand that the present application may be implemented by software plus a necessary general hardware platform, or by hardware. A virtual robot is accurately established from the physical entity data of the robot; an operator uses the force feedback controller to drive the virtual robot through a pose transformation according to the end coordinate data; the target interaction data from interaction with the virtual environment during the pose transformation are acquired; and the force feedback controller executes a force feedback action according to the target interaction data, providing force-tactile perception for the operator, achieving bidirectional immersive haptic interaction between the operator and the virtual robot, and improving the realism of interaction with the virtual robot.
Those skilled in the art will appreciate that the drawings are merely schematic illustrations of a preferred implementation scenario, and that the modules or flows in the drawings are not necessarily required to practice the present application. Those skilled in the art will also appreciate that the modules of an apparatus in an implementation scenario may be distributed as described, or may be changed correspondingly and located in one or more apparatuses different from those of the implementation scenario. The modules of an implementation scenario may be combined into one module or further split into a plurality of sub-modules.
The foregoing serial numbers are for description only and do not represent the merits of the implementation scenarios. The above disclosure is merely a few specific implementation scenarios of the present application; however, the present application is not limited thereto, and any variation conceivable by a person skilled in the art shall fall within the protection scope of the present application.
Claims (9)
1. A method for virtual modeling and haptic control of a robot, the method comprising:
establishing a virtual robot in a virtual environment according to physical entity data of the robot, and establishing a connection between a force feedback controller and the virtual robot;
an operator operates the force feedback controller and transmits end coordinate data of the force feedback controller to the virtual robot;
the virtual robot calculates pose data according to the end coordinate data, performs a pose transformation according to the pose data, acquires target interaction data from interaction with the virtual environment during the pose transformation, and sends the target interaction data to the force feedback controller;
and the force feedback controller executes a force feedback action according to the target interaction data.
2. The method of claim 1, wherein the physical entity data comprises model data, mechanical dimension data, joint number data and motion range data, and wherein establishing a virtual robot in a virtual environment according to the physical entity data of the robot comprises:
establishing a virtual model of each component of the virtual robot according to the model data, mechanical dimension data, joint number data and motion range data of the robot, combining the virtual models of the components according to the actual connection relationships of the robot to form a virtual three-dimensional model of the virtual robot, and binding the parent-child relationships of the joint nodes in the virtual three-dimensional model;
and establishing the virtual robot according to the physical attributes of the robot and the bound virtual three-dimensional model, wherein the physical attributes comprise mass, model size and surface material, and the virtual models of the components comprise a base model, a link model, a sensor model and a front-end tool model.
3. The method of claim 2, wherein binding the parent-child relationships of the joint nodes in the virtual three-dimensional model comprises:
setting the base model as the parent node, and setting the first link model as a child node of the base model;
setting each link model in turn as a child node of the previous link model, according to the preset order of the actual connection relationships of the robot;
and setting the sensor model and the front-end tool model as child nodes of the last link model.
4. The method of claim 2, wherein establishing the virtual robot according to the physical attributes of the robot and the bound virtual three-dimensional model comprises:
building a rigid body component, a collider component and a surface material component for the bound virtual three-dimensional model according to the mass, model size and surface material of the robot, so as to establish the virtual robot;
wherein the rigid body component is determined according to the mass of the robot, the collider component is determined according to the model size of the robot, and the surface material component is determined according to the surface material properties of the robot.
5. The method of claim 1, wherein the virtual robot calculating pose data according to the end coordinate data comprises:
establishing a Cartesian coordinate space of the virtual robot, and establishing an independent coordinate system for each joint of the virtual robot in the Cartesian coordinate space according to the end coordinate data;
determining the transformation matrices between adjacent joints of the virtual robot according to the independent coordinate systems of the joints;
and calculating the pose information of each joint of the virtual robot in sequence according to the transformation matrices of the adjacent joints, so as to obtain the pose data.
6. The method of claim 1, wherein the target interaction data comprises collision target interaction data and grasp target interaction data, and wherein acquiring target interaction data from interaction with the virtual environment during the pose transformation and sending the target interaction data to the force feedback controller comprises:
detecting whether the virtual robot contacts the virtual environment during the pose transformation; when contact occurs, acquiring collision contact position data and collision virtual attribute data of the contacted virtual environment, taking the collision contact position data and the collision virtual attribute data as the collision target interaction data, and sending the collision target interaction data to the force feedback controller, wherein the virtual environment includes virtual objects;
if the operator presses a grab button while the virtual robot is in contact with a virtual object, controlling the virtual robot to execute a grasping action, acquiring grasp contact position data and grasp virtual attribute data, taking the grasp contact position data and the grasp virtual attribute data as the grasp target interaction data, and sending the grasp target interaction data to the force feedback controller; and if the operator releases the grab button, controlling the virtual robot to execute a release action.
7. The method of claim 6, wherein the virtual attribute data comprises rigid body component parameters, collider component parameters and surface material component parameters;
wherein the rigid body component parameters are read from the rigid body component of the contacted virtual object and are determined by the mass of the contacted virtual object; the collider component parameters are read from the collider component of the contacted virtual object and are determined by the model contour of the contacted virtual object; and the surface material component parameters are read from the surface material component of the contacted virtual object and are determined by the hardness data, viscosity data and roughness data of the surface of the contacted virtual object.
8. The method of claim 7, wherein the force feedback action comprises a gravity feedback action, an elastic force feedback action, a viscous force feedback action and a friction force feedback action, and the force feedback controller executing a force feedback action according to the target interaction data comprises:
the force feedback controller executes the gravity feedback action according to the contact position data and the mass of the contacted virtual object;
the force feedback controller executes the elastic force feedback action according to the contact position data and the hardness data of the contacted virtual object;
the force feedback controller executes the viscous force feedback action according to the contact position data and the viscosity data of the contacted virtual object;
and the force feedback controller executes the friction force feedback action according to the contact position data and the roughness data of the contacted virtual object.
9. A robotic virtual modeling and haptic control device, the device comprising:
the building module is used for establishing a virtual robot in a virtual environment according to physical entity data of the robot and establishing a connection between a force feedback controller and the virtual robot;
the operation module is used for an operator to operate the force feedback controller and send the end coordinate data of the force feedback controller to the virtual robot;
the interaction module is used for the virtual robot to calculate pose data according to the end coordinate data, perform a pose transformation according to the pose data, acquire target interaction data from interaction with the virtual environment during the pose transformation, and send the target interaction data to the force feedback controller;
and the feedback module is used for the force feedback controller to execute a force feedback action according to the target interaction data.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202310905781.8A | 2023-07-21 | 2023-07-21 | Robot virtual modeling and touch control method and device
Publications (1)
Publication Number | Publication Date |
---|---|
CN117245648A (en) | 2023-12-19
Family
ID=89135777
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310905781.8A | Robot virtual modeling and touch control method and device | 2023-07-21 | 2023-07-21
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117245648A (en) |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |