CN113843796B - Data transmission method and device, online robot control method and device, and online robot - Google Patents


Info

Publication number
CN113843796B
CN113843796B (application CN202111160198.6A)
Authority
CN
China
Prior art keywords
robot, virtual, current, measured value, moment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111160198.6A
Other languages
Chinese (zh)
Other versions
CN113843796A (en)
Inventor
陈鑫
顾捷
谢青
牛传欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Fourier Intelligence Co Ltd
Original Assignee
Shanghai Fourier Intelligence Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Fourier Intelligence Co Ltd filed Critical Shanghai Fourier Intelligence Co Ltd
Priority to CN202111160198.6A priority Critical patent/CN113843796B/en
Publication of CN113843796A publication Critical patent/CN113843796A/en
Application granted granted Critical
Publication of CN113843796B publication Critical patent/CN113843796B/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661 Programme controls characterised by task planning, object-oriented languages
    • B25J9/1679 Programme controls characterised by the tasks executed
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H1/00 Apparatus for passive exercising; Vibrating apparatus; Chiropractic devices, e.g. body impacting devices, external devices for briefly extending or aligning unbroken bones
    • A61H1/02 Stretching or bending or torsioning apparatus for exercising
    • A61H1/0237 Stretching or bending or torsioning apparatus for exercising for the lower limbs
    • A61H1/0274 Stretching or bending or torsioning apparatus for exercising for the upper limbs
    • A61H2201/00 Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/12 Driving means
    • A61H2201/1207 Driving means with electric or magnetic drive
    • A61H2201/16 Physical interface with patient
    • A61H2201/1657 Movement of interface, i.e. force application means
    • A61H2201/1659 Free spatial automatic movement of interface within a working area, e.g. Robot
    • A61H2201/50 Control means thereof
    • A61H2205/00 Devices for specific parts of the body
    • A61H2205/06 Arms
    • A61H2205/10 Leg
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Rehabilitation Therapy (AREA)
  • Epidemiology (AREA)
  • Pain & Pain Management (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Fuzzy Systems (AREA)
  • Evolutionary Computation (AREA)
  • Manipulator (AREA)

Abstract

The application relates to the technical field of robots and discloses a data transmission method. The method comprises: obtaining a first measured value of a parameter in the previous data transmission period and the corresponding first moment; obtaining a second measured value of the parameter in the current data transmission period and the corresponding second moment; determining the rate of change of the parameter from the first measured value, the first moment, the second measured value and the second moment; determining the current measured value received at the current moment from the rate of change of the parameter, the current moment, the second moment and the second measured value; and transmitting the current measured value to a prediction controller to obtain the current true value that the prediction controller outputs for it. With this method, the current true value obtained by the prediction controller matches the actual value more closely, so the physical engine can better simulate scenes such as collisions between two virtual objects, improving the user experience. The application also discloses a data transmission device, a control method and device for an online robot, and an online robot.

Description

Data transmission method and device, online robot control method and device, and online robot
Technical Field
The present disclosure relates to the field of robot technology, and in particular to a data transmission method and device, a control method and device for an online robot, and an online robot.
Background
A physical engine calculates motion, rotation and collision responses by assigning real physical properties to rigid objects. It can simulate the motion state of a virtual object in a variety of virtual environments; when the physical engine is combined with a robot, the force state, motion state and the like of the virtual object in the virtual environment can be fed back to the user through the robot, giving the user a more realistic haptic experience.
After physical engines are combined with robots, mature network technology allows the virtual objects corresponding to several robots to interact in a single virtual environment, so that a user can interact with other users during limb training or rehabilitation training, which makes such training more engaging.
A physical engine can simulate collisions between virtual objects and the like. When virtual objects corresponding to several robots are simulated in one virtual environment, scenes such as collisions can only be reproduced well if the motion information and/or force information transmitted by each robot is highly real-time.
In the prior art, the force information and position information of a robot can be predicted with a Kalman filter, which suppresses the time delay and improves the real-time quality of the force and position information.
In the course of implementing the embodiments of the present application, at least the following problems were found in the related art:
When the time delay is fixed, a Kalman filter can suppress it accurately and predict force or position information that closely matches the real situation. During network transmission, however, the delay varies constantly; in that case the force or position information predicted by the Kalman filter fits the actual force or position information poorly, the physical engine then struggles to simulate scenes such as collisions between two virtual objects, and the user experience suffers.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, and is intended to neither identify key/critical elements nor delineate the scope of such embodiments, but is intended as a prelude to the more detailed description that follows.
The embodiment of the application provides a data transmission method and device, a control method and device of an online robot and the online robot, so as to solve the technical problem that the predicted force information or position information and the actual force information or position information are poor in fitting degree.
In some embodiments, a data transmission method includes: obtaining a first measured value of a parameter in the previous data transmission period and the corresponding first moment; obtaining a second measured value of the parameter in the current data transmission period and the corresponding second moment; determining the rate of change of the parameter from the first measured value, the first moment, the second measured value and the second moment; determining the current measured value received at the current moment from the rate of change of the parameter, the current moment, the first moment and the first measured value, or from the rate of change of the parameter, the current moment, the second moment and the second measured value; and transmitting the current measured value to a prediction controller to obtain the current true value that the prediction controller outputs for it.
Optionally, the current measurement is determined as follows:
p(t_c) = p_1 + R·(t_c - t_1)

or

p(t_c) = p_2 + R·(t_c - t_2)

where p(t_c) is the current measured value, p_1 is the first measured value at the first moment t_1, p_2 is the second measured value at the second moment t_2, t_c is the current moment, and R is the rate of change of the parameter.
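As a minimal sketch of the extrapolation formula above (function and variable names are ours, not the patent's):

```python
def extrapolate(p1, t1, p2, t2, t_c):
    """Estimate the current measured value p(t_c) from two timestamped
    measurements, per p(t_c) = p2 + R*(t_c - t2)."""
    rate = (p2 - p1) / (t2 - t1)      # R: rate of change of the parameter
    return p2 + rate * (t_c - t2)     # linear extrapolation to the current moment

# e.g. a position that grew by 1.0 over 10 ms, extrapolated 5 ms past t2
print(extrapolate(p1=2.0, t1=10.0, p2=3.0, t2=20.0, t_c=25.0))  # 3.5
```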
Optionally, transmitting the current measured value to the prediction controller to obtain the corresponding current true value includes: inputting the current measured value, together with the previous true parameter value obtained by a Kalman filter in the previous data transmission period, into the Kalman filter; and obtaining the current true value output by the Kalman filter, which corresponds to the previous true parameter value and the current measured value.
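The patent does not spell out the filter equations; as a hedged illustration, one scalar Kalman filter step that fuses the previous true value with the delay-adjusted current measured value could look like this (the noise variances q and r are illustrative placeholders):

```python
def kalman_step(x_prev, p_prev, z, q=1e-4, r=1e-2):
    """One predict/update cycle of a scalar Kalman filter.
    x_prev, p_prev: previous state estimate and its variance
    z: the delay-adjusted current measured value
    q, r: illustrative process/measurement noise variances."""
    # predict (constant-value model: the state carries over)
    x_pred, p_pred = x_prev, p_prev + q
    # update with the current measurement
    k = p_pred / (p_pred + r)           # Kalman gain
    x_new = x_pred + k * (z - x_pred)   # blended "current true value"
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for z in (0.9, 1.1, 1.0):               # noisy measurements around 1.0
    x, p = kalman_step(x, p, z)
print(round(x, 3))                      # estimate converges toward 1.0
```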
In some embodiments, a control method of an online robot includes: the first physical engine transmits first motion information of a first robot to the second physical engine through the data transmission method provided by the foregoing embodiments, wherein the first physical engine simulates, in a first virtual environment, a first virtual object corresponding to the first robot, and the first virtual object in the first virtual environment is kept synchronized with the force state and motion state of the first robot; the second physical engine simulates the first virtual object in a second virtual environment according to the first motion information.
Optionally, the first physical engine transmits the first motion information and first force information of the first robot to the second physical engine; the second physical engine simulates a motion state of the first virtual object in the second virtual environment according to the first motion information and the first force information.
Optionally, synchronizing the first virtual object in the first virtual environment with the force state and motion state of the first robot includes: obtaining the position difference between the virtual position of the first virtual object in the first virtual environment and the real position of the first robot; obtaining the virtual speed of the first virtual object in the first virtual environment, where the virtual position and the virtual speed are obtained by the first physical engine calculating the motion state of the first virtual object from the first force information received by the first robot; obtaining a first product of the position difference and a first set coefficient; determining a desired speed from a first sum of the first product and the virtual speed; and controlling the first robot according to the desired speed.
Optionally, determining the desired speed from the first sum of the first product and the virtual speed includes: obtaining the first force information to which the first robot is subjected; obtaining a second product of the first force information and a second set coefficient; and determining the desired speed from a second sum of the first product, the second product and the virtual speed.
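The desired-speed computation described above can be sketched as follows (the coefficient values k1 and k2 are illustrative placeholders, not values from the patent):

```python
def desired_speed(virtual_pos, real_pos, virtual_vel, force, k1=5.0, k2=0.01):
    """Desired speed for the robot: the virtual speed plus a
    position-error term (first product) and a force term (second product).
    k1, k2 are the first/second set coefficients (illustrative values)."""
    pos_error = virtual_pos - real_pos        # position difference value
    return k1 * pos_error + k2 * force + virtual_vel

# robot lags the virtual object by 0.02 m while 10 N is applied
print(desired_speed(0.52, 0.50, 0.1, 10.0))   # ~0.3 = 0.1 + 5*0.02 + 0.01*10
```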
In some embodiments, a data transmission apparatus includes a processor and a memory storing program instructions, the processor being configured to perform the data transmission method provided by the foregoing embodiments when the program instructions are executed.
In some embodiments, a control device for an online robot includes a processor and a memory storing program instructions, the processor being configured to perform the control method for an online robot provided by the foregoing embodiments when executing the program instructions.
In some embodiments, the online robot includes: the data transmission device provided in the foregoing embodiment, or the control device for an online robot provided in the foregoing embodiment.
The data transmission method and device, the control method and device of the online robot and the online robot provided by the embodiment of the application can realize the following technical effects:
A timestamp is attached to the measured value of the parameter in each data transmission period. Because the delay varies, the moment at which a measured value will be received cannot be anticipated; the current moment and the timestamps of the measured values are therefore used to preliminarily adjust the measured value of the parameter in the current data transmission period, yielding the current measured value and compensating for the effect of the variable delay on the parameter. The adjusted current measured value is then input to the prediction controller, reducing the adverse effect of the variable delay on the controller's prediction of the current true value. The current true value obtained by the prediction controller therefore matches the actual value more closely, the physical engine can better simulate scenes such as collisions between two virtual objects, and the user experience improves.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which:
Fig. 1 is a schematic diagram of an online robot according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a data transmission method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a control method of an online robot according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a data transmission device according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a control device of an online robot according to an embodiment of the present application.
Detailed Description
For a more complete understanding of the features and technical content of the embodiments of the present application, refer to the following detailed description taken in conjunction with the accompanying drawings, which are provided for reference and illustration only and are not intended to limit the embodiments. In the following description, numerous specific details are set forth to provide a thorough understanding of the disclosed embodiments; however, one or more embodiments may be practiced without these details. In other instances, well-known structures and devices are shown in simplified form to keep the drawings simple.
The terms "first", "second" and the like in the description, in the claims, and in the above figures are used to distinguish similar objects and do not necessarily describe a particular order or sequence. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments described herein can be practiced in orders other than those illustrated. Furthermore, the terms "comprise" and "have", and any variations thereof, are intended to cover a non-exclusive inclusion.
The term "plurality" means two or more, unless otherwise indicated.
In the embodiment of the present application, the character "/" indicates that the front and rear objects are an or relationship. For example, A/B represents: a or B.
The term "and/or" is an associative relationship that describes an object, meaning that there may be three relationships. For example, a and/or B, represent: a or B, or, A and B.
A physical engine can be seen as a set of operational rules that obey Newton's three laws of motion: it calculates motion, rotation and collision responses by assigning real physical properties to rigid objects, and can thereby reproduce the motion and interaction rules of objects in the real world. A virtual environment is built in the physical engine in advance, and virtual objects are created in that environment. The physical engine may be Havok, NovodeX, Bullet, ODE, Tokamak, Newton, Simple Physics Engine, etc.; this list is merely illustrative, and other existing physical engines are also suitable for use in the present application.
The physical engine may simulate virtual environments for various scenarios. Different virtual environments have different configuration parameters, which determine the properties of the objects in the environment: their physical attributes, material attributes, geometric attributes, and the connection relationships between objects. The physical attributes describe properties such as mass, position, rotation angle, speed and damping; the material attributes describe material properties such as density, coefficient of friction and coefficient of restitution; the geometric attributes describe the geometry of the objects; and the connection relationships describe how the objects in the virtual environment are associated with one another.
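As a purely illustrative sketch (field names and default values are ours, not the patent's), the configuration parameters of a virtual object could be grouped as:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObjectConfig:
    """Illustrative grouping of the configuration parameters listed above."""
    # physical attributes
    mass: float = 1.0
    position: tuple = (0.0, 0.0, 0.0)
    rotation: tuple = (0.0, 0.0, 0.0)
    velocity: tuple = (0.0, 0.0, 0.0)
    damping: float = 0.0
    # material attributes
    density: float = 1000.0
    friction: float = 0.5
    restitution: float = 0.3
    # geometric attribute
    geometry: str = "box"
    # connection relationships to other objects (e.g. joints)
    joints: list = field(default_factory=list)

cfg = VirtualObjectConfig(mass=2.5, friction=0.8)
print(cfg.mass, cfg.friction)  # 2.5 0.8
```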
After simulating the virtual environment and the virtual objects, the physical engine can calculate the force the virtual environment exerts on a virtual object. By its nature, the virtual environment force may include virtual gravity, virtual universal gravitation, virtual elasticity, virtual friction, virtual molecular force, virtual electromagnetic force, virtual nuclear force and the like. By its effect, it may include virtual tension, virtual pressure, virtual supporting force, virtual power, virtual resistance, virtual centripetal force, virtual restoring force and the like. By whether contact is involved, it may include virtual contact force and virtual non-contact force. By the type of interaction, it may include virtual gravitational interaction force, virtual electromagnetic interaction force, virtual strong interaction force and virtual weak interaction force.
Depending on the particular virtual environment, the virtual environment forces in the present application may be a resultant of any one or more of the forces described above.
Fig. 1 is a schematic diagram of an online robot according to an embodiment of the present application. Referring to Fig. 1, the online robot includes a first robot 11 and a second robot 13. First force information and first motion information are exchanged between the first robot 11 and the first physical engine 12; second force information and second motion information are exchanged between the second robot 13 and the second physical engine 14; and the first force information, first motion information, second force information and second motion information are exchanged between the first physical engine 12 and the second physical engine 14. The first physical engine 12 simulates, in the first virtual environment, a first virtual object corresponding to the first robot 11 according to the first force information or the first position information, and a second virtual object corresponding to the second robot 13 according to the second force information or the second position information. The second physical engine 14 likewise simulates, in the second virtual environment, the first virtual object according to the first force information or the first position information, and the second virtual object according to the second force information or the second position information.
The first virtual environment and the second virtual environment are the same virtual environment, and the configuration parameters of the first virtual environment and the second virtual environment are the same.
It should be understood that the above embodiment only takes the transmission of force information and position information between two physical engines as an example; in a specific application scenario there may be three or more physical engines and robots.
In the above embodiment, each physical engine corresponds to one robot. In a specific application scenario, one physical engine may also correspond to two or more robots, in which case it simulates the virtual objects corresponding to those robots and transmits their force information and/or position information to the other physical engines.
In the above embodiments, only direct communication between the physical engines is taken as an example for illustration, and in a specific application scenario, each physical engine may also be in communication with a server, where multiple physical engines transmit force information and/or location information through the server.
Fig. 2 is a schematic diagram of a data transmission method according to an embodiment of the present application. The data transmission method may be applied when the first physical engine shown in Fig. 1 receives a parameter transmitted by the second physical engine; when the second physical engine receives a parameter transmitted by the first physical engine; when a physical engine receives data transmitted by a server; or when a server receives data transmitted by a physical engine. These application scenarios are only exemplary and do not limit the method, which can be applied in any scenario where the delay varies during data transmission; the physical medium of transmission may be a network, infrared, Bluetooth, ZigBee, or the like.
Referring to Fig. 2, the data transmission method includes:
s201, obtaining a first measured value of a parameter in a previous data transmission period and a first moment corresponding to the first measured value.
This is illustrated with the scenario shown in Fig. 1. When the second physical engine receives a parameter transmitted by the first physical engine, the parameter may be the first force information, or the first motion information such as the first acceleration, the first speed or the first position, or both the first force information and the first motion information.
The first measured value may be a measured value of the first robot, or a measured value of the first virtual object simulated by the first physical engine in the first virtual environment. For example, when the first robot transmits first force information (obtainable from a force sensor provided on the first robot) to the first physical engine, the first physical engine simulates first motion information of the first virtual object from that force information and feeds it back to the first robot, which is then controlled according to the first motion information. In this case the first measured value of the parameter may be the first force information of the first robot (a measured value of the first robot) or the first motion information (a measured value of the first virtual object).
Conversely, when the first robot transmits its first motion information to the first physical engine, the first physical engine simulates first force information of the first virtual object from that motion information and feeds it back to the first robot, which is then controlled according to the first force information. In this case the first measured value of the parameter may be the first motion information of the first robot (a measured value of the first robot) or the first force information (a measured value of the first virtual object).
Data transmission is usually periodic; for example, the first physical engine periodically transmits the parameter to the second physical engine, and the previous data transmission period can be understood as the most recent earlier transmission of the parameter from the first physical engine to the second physical engine.
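As a minimal sketch (the message layout and function name are ours, not the patent's wire format), pairing each periodic measurement with its timestamp might look like:

```python
import time

def make_packet(value, now=None):
    """Pair a parameter measurement with the moment it was obtained,
    so the receiver can compensate for variable transmission delay."""
    return {"value": value,
            "timestamp": time.monotonic() if now is None else now}

pkt = make_packet(3.0, now=20.0)
print(pkt)  # {'value': 3.0, 'timestamp': 20.0}
```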
The first moment is the moment at which the first measured value was obtained: for example, the moment the force sensor detected the first force on the first robot, the moment the physical engine calculated the first motion information of the first virtual object, the moment a sensor detected the first motion information of the first robot, or the moment the physical engine calculated the first force information of the first virtual object.
The above description only exemplifies the second physical engine receiving a parameter transmitted by the first physical engine. In other scenarios, such as the first physical engine receiving a parameter from the second physical engine, a server transmitting a parameter to the first physical engine, or the first physical engine transmitting a parameter to a server, those skilled in the art can determine the first measured value of the relevant parameter and its corresponding first moment from the foregoing examples according to the actual situation.
S202, obtaining a second measured value of the parameter in the current data transmission period and a corresponding second moment.
The second measured value is a measurement of the same parameter as the first measured value and differs from it only in acquisition time. For the parameter type of the second measured value, reference may be made to the foregoing definition of the first measured value, which is not repeated here.
S203, determining the change rate of the parameter according to the first measured value, the first moment, the second measured value and the second moment.
The rate of change of the parameter may be determined from the first measurement, the first time, the second measurement, and the second time using a linear fit.
For example, the rate of change of the parameter may be determined by:
R = (p2 - p1) / (t2 - t1)

where R is the rate of change of the parameter, p1 is the first measured value, t1 is the first moment, p2 is the second measured value, and t2 is the second moment.
In the embodiment above, the parameter is measured only twice. In a specific application, the rate of change of the parameter may also be obtained from more than two measurements: for example, a third measured value of the parameter in the data transmission period before the previous one, and its corresponding third moment, may be obtained, and a linear fit performed over the third measured value and third moment, the first measured value and first moment, and the second measured value and second moment, so as to determine the rate of change of the parameter.
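The multi-measurement case above can be sketched as an ordinary least-squares fit over timestamped samples. The function name and data layout below are illustrative (the patent does not prescribe an implementation); with exactly two samples the result reduces to the two-point slope formula given earlier:

```python
def estimate_rate(samples):
    """Estimate the rate of change R of a parameter by least-squares
    linear fitting over timestamped measurements.

    samples: list of (t, p) pairs, e.g. [(t1, p1), (t2, p2), (t3, p3)].
    With two samples this is exactly (p2 - p1) / (t2 - t1).
    """
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_p = sum(p for _, p in samples) / n
    # Slope of the least-squares regression line p = R * t + b
    num = sum((t - mean_t) * (p - mean_p) for t, p in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    return num / den
```

Using more than two samples makes the estimated rate less sensitive to noise in any single measurement, at the cost of reacting more slowly to genuine changes in the parameter.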
S204, determining a current measured value received at the current moment according to the change rate of the parameter, the current moment, the first moment and the first measured value; or determining the current measured value received at the current moment according to the change rate of the parameter, the current moment, the second moment and the second measured value.
Because the time delay in data transmission is variable, the interval between the current moment and the second moment cannot be predicted. Since the second measured value carries a time stamp (the second moment), the current measured value can be preliminarily predicted from the second measured value, the rate of change of the parameter, and the interval between the current moment and the second moment. The current measured value obtained in this way reduces the adverse effect of the variable delay on the data.
For example, the current measurement may be determined as follows:
p(tc) = p2 + R · (tc - t2)

where p(tc) is the current measured value, tc is the current moment, t2 is the second moment, p2 is the second measured value, and R is the rate of change of the parameter.
Similarly, because the interval between the current moment and the first moment cannot be predicted, and the first measured value carries a time stamp (the first moment), the current measured value can be preliminarily predicted from the first measured value, the rate of change of the parameter, and the interval between the current moment and the first moment. The current measured value obtained in this way likewise reduces the adverse effect of the variable delay on the data.
For example, the current measurement may be determined as follows:
p(tc) = p1 + R · (tc - t1)

where p(tc) is the current measured value, tc is the current moment, t1 is the first moment, p1 is the first measured value, and R is the rate of change of the parameter.
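Both extrapolation formulas above share the same form, so a single helper covers them. This is a sketch under the patent's stated model (linear extrapolation from a timestamped sample); the function name and argument order are illustrative:

```python
def predict_current(p_last, t_last, rate, t_now):
    """Extrapolate a timestamped measurement to the current moment to
    compensate for the variable transmission delay:

        p(t_now) = p_last + R * (t_now - t_last)

    p_last / t_last may be either the first or the second measured
    value and its time stamp; rate is the estimated rate of change R.
    """
    return p_last + rate * (t_now - t_last)
```

For example, a second measured value of 10.0 stamped at t = 1.0 s, with a rate of change of 2.0 units/s, yields a current measured value of 11.0 at t = 1.5 s.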
S205, transmitting the current measured value to a prediction controller to obtain a current true value corresponding to the current measured value output by the prediction controller.
The prediction controller here serves to recover the real data when the data is interrupted or noisy. It may be a Kalman filter, or another algorithm capable of restoring real data, such as simultaneous localization and mapping (Simultaneous Localization And Mapping, SLAM). The embodiments of the present application do not limit the specific algorithm of the prediction controller; those skilled in the art may select an appropriate prediction controller from existing algorithms according to actual requirements.
For clarity of description, only a Kalman filter is taken here as an example of the prediction controller. Transmitting the current measured value to the prediction controller to obtain the corresponding current true value output by it may include: inputting into the Kalman filter the current measured value and the last true parameter value obtained by the Kalman filter in the previous data transmission period; and obtaining the current true value output by the Kalman filter corresponding to the last true parameter value and the current measured value.
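A minimal one-dimensional Kalman filter illustrates the predict/correct cycle described above. The process model (constant value), the noise variances q and r, and the initial state are illustrative assumptions; the patent leaves the predictor's internals open:

```python
class ScalarKalman:
    """Sketch of a scalar Kalman filter: each call to update() carries
    the previous true-value estimate forward (predict) and corrects it
    with the delay-compensated current measured value (correct)."""

    def __init__(self, x0=0.0, p0=1.0, q=1e-3, r=1e-1):
        self.x = x0   # current true-value estimate
        self.p = p0   # estimate variance
        self.q = q    # process-noise variance (assumed)
        self.r = r    # measurement-noise variance (assumed)

    def update(self, measurement):
        # Predict: constant-value model, so the estimate carries over
        # and only its uncertainty grows.
        self.p += self.q
        # Correct: blend in the measurement with the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (measurement - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Fed a noisy but roughly constant parameter, the estimate converges toward the underlying value, which is the "recover the real data" behavior the embodiment relies on.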
In the method, a time stamp is attached to the measured value of the parameter in each transmission period. Because the delay is variable, the time at which a measured value is received cannot be predicted, so the current moment and the time stamp of each measured value are used to preliminarily adjust the measured value in the current data transmission period, yielding the current measured value and reducing the influence of the variable delay on the parameter. Feeding this current measured value to the prediction controller then reduces the adverse effect of the variable delay on the controller's prediction of the current true value, so that the current true value obtained better matches the actual value, the physical engine can better simulate scenes such as a collision between two virtual objects, and the user experience is improved.
Fig. 3 is a schematic diagram of a control method of an online robot according to an embodiment of the present application. Referring to fig. 3, the control method of the online robot includes:
s301, the first physical engine transmits first motion information of the first robot to the second physical engine.
The first physical engine transmits the first motion information of the first robot to the second physical engine through the data transmission method provided in the foregoing embodiment. The first physical engine simulates a first virtual object corresponding to the first robot in a first virtual environment, and the first virtual object in the first virtual environment is synchronous with the stress state and the motion state of the first robot.
S302, the second physical engine simulates a first virtual object in the second virtual environment according to the first motion information.
In this way, by adopting the data transmission method provided by the foregoing embodiments, the first motion information received by the second physical engine is closer to the actual value, and the first virtual object simulated by the second physical engine in the second virtual environment better matches the motion state of the first robot, so that a better use experience can be provided for the user of the second robot corresponding to the second physical engine.
Of course, the second physical engine also transmits second motion information of the second robot to the first physical engine through the data transmission method provided in the foregoing embodiments. The second physical engine simulates a second virtual object corresponding to the second robot in the second virtual environment, where the second virtual object is synchronized with the stress state and motion state of the second robot. After the first physical engine receives the second motion information, it simulates the second virtual object corresponding to the second robot in the first virtual environment, so that a better use experience can be provided for the user of the first robot.
In order for the second physical engine to more accurately simulate the first virtual object in the second virtual environment, the first physical engine may transmit both the first motion information and the first force information of the first robot to the second physical engine, and the second physical engine simulates the motion state of the first virtual object in the second virtual environment according to the first motion information and the first force information. Likewise, in order for the first physical engine to more accurately simulate the second virtual object in the first virtual environment, the second physical engine may transmit second motion information and second force information to the first physical engine, and the first physical engine simulates the motion state of the second virtual object in the first virtual environment accordingly.
In a specific application, the configuration parameters of the first virtual environment and the second virtual environment are the same, so that the user of the first robot and the user of the second robot interact in identically configured virtual environments.
The above embodiments describe, by way of example, only the case where each physical engine corresponds to one robot and simulates the virtual objects corresponding to two robots in one virtual environment. In a specific application scenario, one physical engine may correspond to more than two robots and simulate the virtual objects corresponding to those robots; there may also be three or more physical engines, each of which simulates in its virtual environment not only the virtual object corresponding to its own associated robot but also the virtual objects corresponding to the robots associated with the other physical engines.
In the case where the first robot transmits the first force information to the first physical engine, the first physical engine calculates the first motion information of the first virtual object in the first virtual environment according to the first force information and feeds the first motion information back to the first robot, which synchronizes its stress state and motion state accordingly. Synchronizing the first virtual object in the first virtual environment with the stress state and motion state of the first robot may include: obtaining the position difference between the virtual position of the first virtual object in the first virtual environment and the real position of the first robot; obtaining the virtual speed of the first virtual object in the first virtual environment, where the virtual position and virtual speed are calculated by the first physical engine from the first force information received by the first robot; obtaining a first product of the position difference and a first set coefficient; determining a desired speed from a first sum of the first product and the virtual speed; and controlling the first robot according to the desired speed.
The position difference is fed forward onto the virtual speed through the first set coefficient to obtain the desired speed. Because the desired speed is directly related to the virtual speed, the first robot does not deviate too far from the virtual speed while following the desired speed, and the user obtains a better haptic experience. At the same time, because the desired speed is also directly related to the position difference, the deviation between the real position of the robot and the virtual position of the virtual object is suppressed while the robot follows the desired speed, further improving the user experience.
The first set coefficient may be smaller than 1, so that the position difference influences the desired speed less than the virtual speed does. In other words, while the first robot follows the first virtual object by tracking the desired speed, following the virtual speed of the first virtual object remains the primary objective; the speed difference between the real speed of the first robot and the virtual speed of the first virtual object is reduced at the same time as the position difference between the real position of the first robot and the virtual position of the first virtual object is suppressed, improving the haptic experience of the user of the first robot.
Still further, the first set coefficient may be determined by obtaining a value inversely related to the position difference. With this scheme, when the position difference between the real position of the first robot and the virtual position of the first virtual object is large, the first set coefficient is small, reducing the first product; the first sum of the first product and the virtual speed is then taken as the desired speed, so the influence of the large position difference on the desired speed is limited, the first robot better follows the virtual speed of the first virtual object, and the user's haptic experience improves. Conversely, when the position difference is small, the first set coefficient is large, increasing the first product and with it the influence of the position difference on the desired speed, so the first robot better follows the virtual position of the first virtual object.
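The desired-speed feedforward with an inversely related first coefficient can be sketched as follows. The specific form k1 = c / (1 + |error|) and the constant c are illustrative assumptions; the patent only requires that the coefficient be smaller than 1 and inversely related to the position difference:

```python
def desired_speed(virtual_pos, real_pos, virtual_vel, c=0.5):
    """Feed the position error forward onto the virtual speed:

        v_des = k1 * (virtual_pos - real_pos) + virtual_vel

    where k1 < 1 shrinks as the position error grows, so a large
    error cannot dominate the speed-following term.
    """
    error = virtual_pos - real_pos
    k1 = c / (1.0 + abs(error))   # inversely related, always < 1 for c <= 1
    return k1 * error + virtual_vel
```

When the robot is exactly at the virtual position, the desired speed equals the virtual speed; as the error grows, a bounded corrective term is added on top of it.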
The first physical engine calculating a motion state of the first virtual object according to the first force information applied by the first robot may include: obtaining configuration parameters of a first virtual environment created by a first physical engine; determining virtual environment acting force of the first virtual environment on the first virtual object according to the configuration parameters; obtaining resultant force of virtual environment acting force and first force information; a virtual position and a virtual speed are determined based on the resultant force and the current motion state of the first virtual object.
The first physical engine determines the virtual acting force of the first virtual environment on the first virtual object according to the configuration parameters, and the calculation process of the first physical engine follows Newton's law of motion in the process of determining the virtual position and the virtual speed according to the resultant force and the current motion state of the first virtual object.
Through the above process, the virtual position and the virtual speed of the first virtual object in the first virtual environment can be obtained. And then controlling the first robot according to the virtual position and the virtual speed, so that the first robot follows the virtual position and the virtual speed under the action of the driving force and the first force information.
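The resultant-force and integration steps above can be sketched for a single one-dimensional virtual object. The semi-implicit Euler integrator and the function signature are illustrative choices (the patent only requires that the calculation follow Newton's laws of motion):

```python
def step_virtual_object(mass, pos, vel, env_force, user_force, dt):
    """One simulation step of the first virtual object.

    env_force:  virtual-environment force derived from the
                configuration parameters (gravity, friction, ...).
    user_force: first force information measured on the first robot.
    Sums the two forces, applies Newton's second law, and integrates
    one time step to produce the new virtual position and speed.
    """
    total_force = env_force + user_force
    acc = total_force / mass
    vel = vel + acc * dt        # update speed first (semi-implicit Euler)
    pos = pos + vel * dt        # then advance position with the new speed
    return pos, vel
```

For a 2 kg object at rest under a 4 N resultant force, one 1 s step yields a virtual speed of 2 m/s and a virtual position of 2 m, which are then fed back to control the first robot.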
In addition, the desired speed may be determined as follows: obtaining the first force information applied to the first robot; obtaining a second product of the first force information and a second set coefficient; and determining the desired speed from a second sum of the first product, the second product, and the virtual speed.
A force sensor provided on the first robot, for example a three-dimensional force sensor, may be used to obtain the first force information, which may be the force applied to the first robot by the user in the course of using it.
In some application scenarios, there is also a variable time delay between the first robot and the first physical engine; for example, when the first robot communicates with the first physical engine wirelessly and is limited by signal strength, the delay between them varies. Wireless communication means here include, but are not limited to, Wi-Fi, Bluetooth, and infrared.
During the information interaction between the first robot and the first physical engine, the first robot transmits the received first force information to the first physical engine, the first physical engine simulates the first motion state of the first virtual object according to the first force information and feeds that motion state back to the first robot, and the first robot then follows the first motion state of the first virtual object. That is, the first force information received by the robot acts on the robot again through one cycle, and the delays of the information transmission process lie within this cycle.
The second product of the first force information and the second set coefficient is fed forward directly onto the virtual speed to obtain the desired speed; that is, the real-time force information acts directly on the first robot. The first robot can therefore follow the first force information to a certain extent, the influence within the desired speed of the delay-affected virtual position (position difference) or virtual speed is reduced, and controlling the first robot at this desired speed reduces its shaking.
The second set coefficient represents the degree to which the first force information influences the desired speed: the larger the coefficient, the greater the influence of the first force information on the desired speed and hence on the actual speed of the first robot; the smaller the coefficient, the smaller that influence. The second set coefficient can be obtained by testing: several different values are set, and the first robot and first physical engine are tested with each. If the first robot follows the first virtual object poorly, the coefficient is too large and should be reduced; if changes in the first force information received by the first robot cause it to shake heavily, the coefficient is too small and should be increased. The test finally determines a second set coefficient that satisfies both the following-effect and stability (small robot shake) requirements of the first robot.
Further, determining the desired speed from the second sum of the first product, the second product, and the virtual speed may include: when a variable time delay exists in the process of obtaining the virtual speed, determining as the desired speed the sum of the second product, the first product, and a first average of the virtual speeds over a first set duration.
Alternatively, determining the desired speed from the first product, the second product, and the virtual speed may include: when a variable time delay exists in the process of obtaining the virtual position, determining as the desired speed the sum of the second product, a second average of the first products over a second set duration, and the virtual speed.
Alternatively, determining the desired speed from the first product, the second product, and the virtual speed may include: when variable time delays exist in obtaining both the virtual speed and the virtual position, determining as the desired speed the sum of the second product, the second average of the first products over the second set duration, and the first average of the virtual speeds over the first set duration.
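The averaging variants above can be sketched with sliding windows over the delay-affected terms. The class name, window length, and coefficient value are illustrative; the window lengths play the role of the set durations:

```python
from collections import deque

class SmoothedSpeed:
    """Average the delay-affected terms (first products and virtual
    speeds) over sliding windows before combining them with the
    directly measured force term:

        v_des = k2 * F + mean(first products) + mean(virtual speeds)
    """

    def __init__(self, window=10, k2=0.1):
        self.k2 = k2
        self.products = deque(maxlen=window)   # first products (k1 * error)
        self.speeds = deque(maxlen=window)     # virtual speeds

    def update(self, first_product, virtual_speed, force):
        self.products.append(first_product)
        self.speeds.append(virtual_speed)
        avg_p = sum(self.products) / len(self.products)
        avg_v = sum(self.speeds) / len(self.speeds)
        return self.k2 * force + avg_p + avg_v
```

A longer window smooths out delay-induced jumps (less shake) but makes the robot follow the virtual object more sluggishly, which is exactly the trade-off the following paragraphs discuss.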
The first set duration, the second set duration, and the third set duration may be the same or different. The larger the set duration, the worse the first robot's following of the first virtual object, but the smaller its shake (the better its stability); the smaller the set duration, the better the following effect, but the larger the shake (the worse the stability). The embodiments of the present application do not limit the specific values of the first, second, and third set durations; those skilled in the art can determine appropriate values through a limited number of tests according to the parameter requirements of the first robot.
In this technical solution, computing averages degrades the first robot's following of the virtual speed and virtual position, but because the second product of the force information borne by the robot and the second set coefficient is present, the first robot can still follow its received first force information in time, which improves its following of the virtual speed. After the first, second, or third set duration has been determined, the second set coefficient can be adjusted to improve the first robot's following of the first virtual object, so that the following effect is maintained while the stability of the first robot is improved (its shaking reduced).
The above embodiments take only the first robot and the first physical engine as an example to describe how the first virtual object in the first virtual environment is synchronized with the stress state and motion state of the first robot. The second physical engine synchronizes the second virtual object in the second virtual environment with the stress state and motion state of the second robot in the same manner, which is not repeated herein.
In some embodiments, the data transmission apparatus includes a processor and a memory storing program instructions, the processor being configured to perform the data transmission method provided by the foregoing embodiments when the program instructions are executed.
In some embodiments, a control device for an online robot includes a processor and a memory storing program instructions, the processor being configured to execute the control method for an online robot provided in the foregoing embodiments when executing the program instructions.
Fig. 4 is a schematic diagram of a data transmission device according to an embodiment of the present application. As shown in fig. 4, the data transmission apparatus includes:
a processor (processor) 41 and a memory (memory) 42, and may also include a communication interface (Communication Interface) 43 and a bus 44. The processor 41, the communication interface 43 and the memory 42 may communicate with each other via a bus 44. The communication interface 43 may be used for information transmission. The processor 41 may call logic instructions in the memory 42 to perform the data transmission method provided by the foregoing embodiment.
Further, the logic instructions in the memory 42 described above may be implemented in the form of software functional units and stored in a computer readable storage medium when sold or used as a stand alone product.
The memory 42 is a computer readable storage medium that can be used to store a software program, a computer executable program, such as program instructions/modules corresponding to the methods in the embodiments of the present application. The processor 41 executes functional applications and data processing by running software programs, instructions and modules stored in the memory 42, i.e. implements the methods of the method embodiments described above.
Memory 42 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created according to the use of the terminal device, etc. In addition, memory 42 may include high-speed random access memory, and may also include non-volatile memory.
Fig. 5 is a schematic diagram of a control device of an online robot according to an embodiment of the present application. Referring to fig. 5, the control device for an on-line robot includes:
a processor (processor) 51 and a memory (memory) 52, and may also include a communication interface (Communication Interface) 53 and a bus 54. The processor 51, the communication interface 53, and the memory 52 may communicate with each other via the bus 54. The communication interface 53 may be used for information transfer. The processor 51 may call logic instructions in the memory 52 to perform the control method of the on-line robot provided by the previous embodiment.
Further, the logic instructions in the memory 52 described above may be implemented in the form of software functional units and stored in a computer readable storage medium when sold or used as a stand alone product.
The memory 52 is a computer readable storage medium that can be used to store a software program, a computer executable program, and program instructions/modules corresponding to the methods in the embodiments of the present application. The processor 51 executes functional applications and data processing by running software programs, instructions and modules stored in the memory 52, i.e. implements the methods of the method embodiments described above.
Memory 52 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created according to the use of the terminal device, etc. In addition, the memory 52 may include high-speed random access memory, and may also include nonvolatile memory.
The embodiment of the application provides an online robot, which comprises the data transmission device provided by the embodiment, or comprises the control device of the online robot provided by the embodiment.
The present embodiments provide a computer-readable storage medium storing computer-executable instructions configured to perform the data transmission method provided in the foregoing embodiments.
The present embodiments provide a computer-readable storage medium storing computer-executable instructions configured to perform the control method of the online robot provided in the foregoing embodiments.
The present application provides a computer program product comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the data transmission method provided by the previous embodiments.
The present application provides a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the method for controlling an on-line robot provided by the previous embodiments.
The computer readable storage medium may be a transitory computer readable storage medium or a non-transitory computer readable storage medium.
The technical solutions of the embodiments of the present application may be embodied in the form of a software product, where the software product is stored in a storage medium, and includes one or more instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium may be a non-transitory storage medium including: a plurality of media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or a transitory storage medium.
The above description and the drawings illustrate embodiments of the present application sufficiently to enable those skilled in the art to practice them. Other embodiments may involve structural, logical, electrical, process, and other changes. The embodiments represent only possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in, or substituted for, those of others. Moreover, the terminology used in the present application is for the purpose of describing embodiments only and is not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, when used in this application, the terms "comprises," "comprising," and/or "includes," and variations thereof, mean that the stated features, integers, steps, operations, elements, and/or components are present, but the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof is not precluded. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, or apparatus comprising that element. In this context, each embodiment may be described with emphasis on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to each other. For the methods, products, etc. disclosed in the embodiments, if they correspond to the method sections disclosed therein, the description of those method sections may be referred to for relevant details.
Those of skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. The skilled person may use different methods for each particular application to achieve the described functionality, but such implementation should not be considered to be beyond the scope of the embodiments of the present application. It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In the embodiments disclosed herein, the disclosed methods and articles of manufacture (including but not limited to devices and apparatuses) may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of units may be merely a logical functional division, and other divisions are possible in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms. Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the embodiments. Furthermore, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Claims (9)

1. A data transmission method, comprising:
obtaining a first measured value of a parameter in a previous data transmission period and a corresponding first moment;
obtaining a second measured value of the parameter in the current data transmission period and a corresponding second moment;
determining a change rate of the parameter according to the first measured value, the first moment, the second measured value and the second moment;
determining a current measured value received at the current moment according to the change rate of the parameter, the current moment, the first moment and the first measured value; or determining a current measured value received at the current moment according to the change rate of the parameter, the current moment, the second moment and the second measured value;
inputting the current measured value and the last true parameter value obtained by a Kalman filter in the last data transmission period into the Kalman filter;
and obtaining a current true value output by the Kalman filter, wherein the current true value corresponds to the last true parameter value and the current measured value.
2. The data transmission method according to claim 1, wherein the current measurement value is determined as follows:
p(t_c) = p_1 + R·(t_c − t_1)
or
p(t_c) = p_2 + R·(t_c − t_2)
wherein p(t_c) is the current measured value, t_1 is the first moment, p_1 is the first measured value, t_2 is the second moment, t_c is the current moment, p_2 is the second measured value, and R is the change rate of the parameter.
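Claims 1 and 2 together describe a two-stage pipeline: linearly extrapolate the latest measurement to the current instant using the estimated change rate, then smooth the result with a Kalman filter. The following is a minimal sketch of that pipeline, not the patented implementation; the function names, noise variances, and sample values are all illustrative assumptions.

```python
def extrapolate(p1, t1, p2, t2, tc):
    # Claim 2: project the measurement forward to the current moment tc
    # using the change rate R estimated over the last transmission period.
    R = (p2 - p1) / (t2 - t1)       # claim 1: change rate of the parameter
    return p2 + R * (tc - t2)       # equivalently: p1 + R * (tc - t1)

class ScalarKalman:
    """Minimal 1-D Kalman filter (random-walk model): fuses the extrapolated
    measurement with the last true parameter value. q and r are assumed
    process/measurement noise variances, not values from the patent."""
    def __init__(self, x0=0.0, q=1e-3, r=1e-2):
        self.x, self.p = x0, 1.0    # state estimate and its variance
        self.q, self.r = q, r

    def update(self, z):
        self.p += self.q                 # predict step
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct with measurement z
        self.p *= 1.0 - k
        return self.x                    # current true value (claim 1)

# Example: measurements 1.0 at t=0 and 2.0 at t=1, current time t=1.5.
z = extrapolate(p1=1.0, t1=0.0, p2=2.0, t2=1.0, tc=1.5)   # 2.5
kf = ScalarKalman(x0=2.0)    # last true parameter value = 2.0 (assumed)
current_true = kf.update(z)  # lies between 2.0 and 2.5
```

Both forms of the claim-2 formula give the same result because R is computed from the same two sample points.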
3. A method for controlling an online robot, comprising:
a first physical engine transmits first motion information of a first robot to a second physical engine through the data transmission method of claim 1 or 2, wherein the first physical engine simulates, in a first virtual environment, a first virtual object corresponding to the first robot, and the first virtual object in the first virtual environment is synchronized with a force state and a motion state of the first robot;
the second physical engine simulates the first virtual object in a second virtual environment according to the first motion information.
4. The control method according to claim 3, wherein,
the first physical engine transmitting the first motion information and first force information of the first robot to the second physical engine;
the second physical engine simulates a motion state of the first virtual object in the second virtual environment according to the first motion information and the first force information.
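Claims 3 and 4 describe one physics engine sampling a robot's motion (and, in claim 4, force) information each cycle and a second engine replaying it to simulate the same virtual object in its own environment. A minimal sketch of that data flow, with all names and sample values being assumptions:

```python
from dataclasses import dataclass

@dataclass
class MotionPacket:
    """Hypothetical payload carried by the claim-1/2 transmission method."""
    timestamp: float
    position: float
    velocity: float
    force: float            # claim 4: first force information

class MirroredObject:
    """Virtual object simulated by the second engine (claim 3)."""
    def __init__(self):
        self.position = 0.0
        self.velocity = 0.0

    def apply(self, pkt: MotionPacket):
        # Second engine side: update the mirrored object from the
        # received first motion information.
        self.position = pkt.position
        self.velocity = pkt.velocity

# First engine side: sample the robot state and "transmit" it.
pkt = MotionPacket(timestamp=0.02, position=0.15, velocity=0.5, force=1.2)
mirror = MirroredObject()
mirror.apply(pkt)
```

In practice the packet would travel over a network link, with the claim-2 extrapolation compensating for the transmission delay between the two engines.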
5. The control method according to claim 3 or 4, characterized in that the first virtual object in the first virtual environment being synchronized with a force state and a motion state of the first robot comprises:
obtaining a position difference value between a virtual position of the first virtual object in the first virtual environment and a real position of the first robot;
obtaining a virtual speed of the first virtual object in the first virtual environment, wherein the virtual position and the virtual speed are obtained by the first physical engine calculating a motion state of the first virtual object according to first force information received by the first robot;
obtaining a first product of the position difference value and a first set coefficient;
determining a desired speed from a first sum of the first product and the virtual speed;
and controlling the first robot according to the expected speed.
6. The control method of claim 5, wherein determining a desired speed from the first sum of the first product and the virtual speed comprises:
obtaining first force information to which the first robot is subjected;
obtaining a second product of the first force information and a second set coefficient;
determining the desired speed according to a second sum of the first product, the second product, and the virtual speed.
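Claims 5 and 6 define the desired speed as a weighted combination of the position error between the virtual object and the real robot, the measured force, and the virtual speed. A sketch of that control law follows; the coefficient values `k_pos` and `k_force` (the "first" and "second set coefficients") and the sample inputs are hypothetical:

```python
def desired_speed(virtual_pos, real_pos, virtual_vel, force,
                  k_pos=2.0, k_force=0.1):
    # Claim 5: position difference between virtual and real positions.
    pos_error = virtual_pos - real_pos
    first_product = k_pos * pos_error       # first set coefficient (assumed)
    second_product = k_force * force        # claim 6: second set coefficient
    # Claim 6: second sum of the first product, the second product,
    # and the virtual speed.
    return first_product + second_product + virtual_vel

# Virtual object leads the robot by 0.05 m while moving at 0.2 m/s under 1 N.
v = desired_speed(virtual_pos=0.50, real_pos=0.45, virtual_vel=0.2, force=1.0)
# 2.0*0.05 + 0.1*1.0 + 0.2 ≈ 0.4
```

Driving the robot at this speed pulls it toward the virtual object's position while letting the applied force and the virtual motion feed forward into the command.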
7. A data transmission apparatus comprising a processor and a memory storing program instructions, wherein the processor is configured to perform the data transmission method of claim 1 or 2 when executing the program instructions.
8. A control device of an online robot comprising a processor and a memory storing program instructions, characterized in that the processor is configured to perform the control method of an online robot according to any one of claims 3 to 6 when executing the program instructions.
9. An online robot, comprising:
the data transmission device according to claim 7, or the control device of an online robot according to claim 8.
CN202111160198.6A 2021-09-30 2021-09-30 Data transmission method and device, online robot control method and device, and online robot Active CN113843796B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111160198.6A CN113843796B (en) 2021-09-30 2021-09-30 Data transmission method and device, online robot control method and device, and online robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111160198.6A CN113843796B (en) 2021-09-30 2021-09-30 Data transmission method and device, online robot control method and device, and online robot

Publications (2)

Publication Number Publication Date
CN113843796A (en) 2021-12-28
CN113843796B (en) 2023-04-28

Family

ID=78977330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111160198.6A Active CN113843796B (en) 2021-09-30 2021-09-30 Data transmission method and device, online robot control method and device, and online robot

Country Status (1)

Country Link
CN (1) CN113843796B (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9303579B2 (en) * 2012-08-01 2016-04-05 GM Global Technology Operations LLC System and method for monitoring a particulate filter in a vehicle exhaust aftertreatment device
CN104200125B (en) * 2014-09-23 2017-03-08 珠海格力电器股份有限公司 A kind of acquisition methods of drying predicted time, apparatus and system
JP5808510B1 (en) * 2015-06-02 2015-11-10 ソフトバンク株式会社 Prediction device and program
CN107030699B (en) * 2017-05-18 2020-03-10 广州视源电子科技股份有限公司 Pose error correction method and device, robot and storage medium
CN111038514B (en) * 2019-12-30 2021-10-08 潍柴动力股份有限公司 Vehicle speed control method and related device
CN113255902A (en) * 2020-02-11 2021-08-13 华为技术有限公司 Neural network circuit, system and method for controlling data flow
CN111338287A (en) * 2020-03-13 2020-06-26 南方科技大学 Robot motion control method, device and system, robot and storage medium
CN111251305B (en) * 2020-03-13 2023-02-07 南方科技大学 Robot force control method, device, system, robot and storage medium

Also Published As

Publication number Publication date
CN113843796A (en) 2021-12-28

Similar Documents

Publication Publication Date Title
CN108664122A (en) A kind of attitude prediction method and apparatus
CN113771043B (en) Control method and device for enabling robot to follow virtual object and rehabilitation robot
US11440183B2 (en) Hybrid machine learning-based systems and methods for training an object picking robot with real and simulated performance data
CN109064487B (en) Human body posture comparison method based on Kinect skeleton node position tracking
US10967505B1 (en) Determining robot inertial properties
Vozar et al. Driver modeling for teleoperation with time delay
CN113829347B (en) Robot control method and device based on physical engine and rehabilitation robot
CN111716361A (en) Robot control method and device and surface-surface contact model construction method
CN113843796B (en) Data transmission method and device, online robot control method and device, and online robot
Duysak et al. Efficient modelling and simulation of soft tissue deformation using mass-spring systems
JP4653101B2 (en) Haptic transmission system
CN114833826B (en) Control method and device for realizing collision touch sense of robot and rehabilitation robot
EP1376316A1 (en) Haptic communications
Smith et al. Adaptive teleoperation using neural network-based predictive control
CN113829348B (en) Robot control method and device based on physical engine and rehabilitation robot
CN109397284A (en) A kind of synchronisation control means of principal and subordinate's mechanical arm system containing unknown parameter
CN112632803A (en) Tracking control method and device, electronic equipment and storage medium
US9275488B2 (en) System and method for animating a body
CN110286760B (en) Force feedback control method and device for virtual reality
CN113855472B (en) Method and device for controlling exoskeleton robot and exoskeleton robot
CN114770511B (en) Robot control method and device based on physical touch sense and robot
CN116633983A (en) Communication method and device for multi-robot collaborative operation and communication middleware
CN113855475B (en) Method and device for controlling two rehabilitation robots and rehabilitation robot system
CN113855474B (en) Method and device for controlling two rehabilitation robots and rehabilitation robot system
JP2011212223A (en) Device for determining position in cutting simulation processing and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant