CN114918976B - Joint robot health state assessment method based on digital twinning technology

Joint robot health state assessment method based on digital twinning technology

Info

Publication number
CN114918976B
CN114918976B (application CN202210686455.8A)
Authority
CN
China
Prior art keywords
joint
joint robot
motor
robot
loss
Prior art date
Legal status
Active
Application number
CN202210686455.8A
Other languages
Chinese (zh)
Other versions
CN114918976A (en)
Inventor
于艺春
余丹
兰雨晴
王丹星
Current Assignee
China Standard Intelligent Security Technology Co Ltd
Original Assignee
China Standard Intelligent Security Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Standard Intelligent Security Technology Co Ltd filed Critical China Standard Intelligent Security Technology Co Ltd
Priority to CN202210686455.8A
Publication of CN114918976A
Application granted
Publication of CN114918976B
Legal status: Active

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/0095Means or methods for testing manipulators

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a joint robot health state assessment method based on the digital twin technology. A sensor is arranged in each joint motor of the joint robot to detect the motion data of that joint motor; the motion data are analyzed to determine the motor loss data of the joint robot; the motor loss data are then analyzed to obtain the current motor loss state information of the joint robot, from which it is judged whether the joint robot is currently in a motor excessive loss state, and the working state of the corresponding joint motor is adjusted accordingly. By means of the digital twin technology, the running state of the joint robot is visualized, a corresponding digital twin model is constructed, and the historical motion data of the joint motors are mined and analyzed, so that the health state of the joint robot can be assessed, the waste of overhaul resources is reduced, and unplanned shutdowns of the joint robot are effectively avoided.

Description

Joint robot health state assessment method based on digital twinning technology
Technical Field
The invention relates to the technical field of robot control, in particular to a joint robot health state assessment method based on a digital twin technology.
Background
Joint robots are widely used in manufacturing as automated industrial production tools and offer advantages such as strong practicality, high efficiency, stability and high precision. In actual operation, the joint robot must be inspected and maintained to ensure that it works normally and continuously. Existing practice mainly relies on two modes, visual maintenance and scheduled maintenance: visual maintenance can only be carried out after the joint robot has already failed and cannot predict its potential risks, while scheduled maintenance suffers from servicing that is either untimely or unnecessary.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a joint robot health state assessment method based on the digital twinning technology. A sensor is arranged in each joint motor of the joint robot to detect the motion data of that joint motor; the motion data are analyzed to determine the motor loss data of the joint robot; the motor loss data are then analyzed to obtain the current motor loss state information of the joint robot, from which it is judged whether the joint robot is currently in a motor excessive loss state. By means of the digital twinning technology, the running state of the joint robot is visualized, a corresponding digital twin model is constructed, and the historical motion data of the joint motors are mined and analyzed, so that the health state of the joint robot can be assessed, the waste of overhaul resources is reduced, and unplanned shutdowns of the joint robot are effectively avoided.
The invention provides a joint robot health state assessment method based on a digital twinning technology, which comprises the following steps:
Step S1: instructing the joint robot to move, and collecting a motion image of the joint robot; analyzing the motion image to determine whether the current actual motion posture of the joint robot matches the expected motion posture;
Step S2: when the actual motion posture does not match the expected motion posture, instructing a motion sensor inside the joint robot to collect motion data of the joint robot during the motion; analyzing the motion data to determine the motor loss data of the joint robot;
Step S3: analyzing the motor loss data to obtain the current motor loss state information of the joint robot; judging, according to the motor loss state information, whether the joint robot is currently in a motor excessive loss state;
Step S4: adjusting the working state of the corresponding joint motor according to the judgment result for the joint robot.
Further, in the step S1, instructing the joint robot to move and acquiring a motion image of the joint robot specifically includes:
sending a motion instruction to a joint motor of the joint robot to drive the corresponding joint motor to operate, so that the joint robot moves; and meanwhile, shooting the motion process of the joint robot so as to obtain a motion image of the joint robot.
Further, in step S1, analyzing the motion image, and determining whether the current actual motion posture of the joint robot matches the expected motion posture specifically includes:
sequentially extracting a plurality of action picture frames from the action image, and identifying and obtaining the current action amplitude and action direction of a manipulator of the joint robot from each action picture frame to be used as an actual action posture;
comparing that action amplitude with the action amplitude corresponding to the expected motion posture, and determining the action amplitude deviation value between the two amplitudes;
comparing that action direction with the action direction corresponding to the expected motion posture, and determining the action direction deviation angle value between the two directions;
if the action amplitude deviation value is greater than or equal to a preset amplitude deviation threshold, or the action direction deviation angle value is greater than or equal to a preset deviation angle threshold, determining that the actual motion posture does not match the expected motion posture; otherwise, determining that the actual motion posture matches the expected motion posture.
Further, in the step S2, when the actual motion posture is not matched with the expected motion posture, a motion sensor inside the joint robot is instructed to acquire motion data of the joint robot in the motion process; analyzing and processing the motion data, and determining the motor loss data of the joint robot specifically comprises the following steps:
each joint motor of the joint robot is provided with a torque sensor, and when the actual motion posture does not match the expected motion posture, each torque sensor is instructed to collect, at a preset sampling frequency value, the rotation angle of the corresponding joint motor relative to its preset initial position during the period from the start-up of the joint robot to the current moment, and this serves as the motion data;
and analyzing and processing the motion data to determine the power loss of each joint motor of the joint robot.
Further, in step S2, the analyzing and processing the motion data and determining the power loss of each joint motor of the joint robot specifically includes:
obtaining, by using the following formula (1), the serial numbers of the joint motors that are simultaneously in the working state during the period from the start-up of the joint robot to the current moment, according to a preset sampling frequency value and the rotation angle of each joint motor relative to its preset initial position collected by the corresponding torque sensor at each sampling moment,

(Formula (1) is shown as an image in the original publication.)

In the above formula (1), E[(t₀ + k×T), i] represents the working state value of the i-th joint motor of the joint robot at moment t₀ + k×T during the period from the start-up of the joint robot to the current moment; t₀ represents the moment at which the joint robot is started; T represents the sampling period of the torque sensors, which is the reciprocal of the preset sampling frequency value; k represents an integer variable whose value ranges over the sampling indices from start-up to the current moment, with upper bound ⌊(t − t₀)/T⌋, where t represents the current moment and ⌊·⌋ represents the rounding-down operation; θ[(t₀ + k×T), i] represents the rotation angle value of the i-th joint motor of the joint robot relative to its preset initial position at moment t₀ + k×T; θ[(t₀ + (k−1)×T), i] represents the rotation angle value of the i-th joint motor of the joint robot relative to its preset initial position at moment t₀ + (k−1)×T;
if E[(t₀ + k×T), i] = 1, the i-th joint motor of the joint robot is in the working state at moment t₀ + k×T during the period from start-up to the current moment;
if E[(t₀ + k×T), i] = 0, the i-th joint motor of the joint robot is in the stopped state at moment t₀ + k×T during that period;
then, by using the following formula (2), the loss power value of each joint motor of the joint robot during the period from the start-up of the joint robot to the current moment is obtained according to the serial numbers of the joint motors in the working state, the total power of the joint robot collected at each sampling moment and the torque value of the corresponding joint motor collected by each torque sensor,

(Formula (2) is shown as an image in the original publication.)

In the above formula (2), p[(t₀ + k×T), i] represents the loss power value of the i-th joint motor of the joint robot at moment t₀ + k×T during the period from start-up to the current moment; r_total represents the total internal resistance value of the joint robot; U represents the working voltage value of the joint robot; P(t₀ + k×T) represents the total power value of the joint robot at moment t₀ + k×T during that period; G[(t₀ + k×T), i] represents the torque value of the i-th joint motor of the joint robot at moment t₀ + k×T; n represents the total number of joint motors included in the joint robot.
Further, in step S3, analyzing and processing the motor loss data to obtain current motor loss state information of the joint robot specifically includes:
obtaining a loss line graph of each joint motor of the joint robot according to the loss power value of each joint motor by using the following formula (3),

(Formula (3) is shown as an image in the original publication.)

In the above formula (3), h[(t₀ + k×T), i] represents the height value of the polyline point corresponding to moment t₀ + k×T on the abscissa (time) axis of the loss line graph of the i-th joint motor of the joint robot; if the denominator becomes zero during the calculation, h[(t₀ + k×T), i] is directly set equal to H; H represents the maximum display height of the loss line graph; the remaining two terms of formula (3) represent, respectively, the maximum and the minimum of the loss power values of the i-th joint motor at the sampling moments during the period from the start-up of the joint robot to the current moment.
Further, in step S3, determining whether the joint robot is currently in the motor excessive loss state according to the motor loss state information specifically includes:
determining an average power loss value of each joint motor in the process from the start of the joint robot to the current moment according to the loss line graph of each joint motor;
if the number of the joint motors with the average power loss value larger than or equal to the preset power loss threshold value in the joint robot exceeds the preset number threshold value, determining that the joint robot is in a motor excessive loss state currently; otherwise, determining that the joint robot is not in the motor excessive loss state currently.
Further, in step S4, according to the judgment result of the joint robot, the adjusting the working state of the corresponding joint motor specifically includes:
and when the joint robot is determined to be in the motor excessive loss state at present, carrying out restarting treatment or bearing lubrication treatment on the joint motor with the average power loss value being greater than or equal to the preset power loss threshold value.
Compared with the prior art, in the joint robot health state assessment method based on the digital twin technology, a sensor is arranged in each joint motor of the joint robot to detect the motion data of that joint motor; the motion data are analyzed to determine the motor loss data of the joint robot; the motor loss data are then analyzed to obtain the current motor loss state information of the joint robot, from which it is judged whether the joint robot is currently in a motor excessive loss state. By means of the digital twin technology, the running state of the joint robot is visualized, a corresponding digital twin model is constructed, and the historical motion data of the joint motors are mined and analyzed, so that the health state of the joint robot can be assessed, the waste of overhaul resources is reduced, and unplanned shutdowns of the joint robot are effectively avoided.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flow diagram of a joint robot health state assessment method based on a digital twinning technique according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of a method for assessing a health state of a joint robot based on a digital twinning technique according to an embodiment of the present invention. The joint robot health state assessment method based on the digital twinning technology comprises the following steps:
Step S1: instructing the joint robot to move, and collecting a motion image of the joint robot; analyzing the motion image to determine whether the current actual motion posture of the joint robot matches the expected motion posture;
Step S2: when the actual motion posture does not match the expected motion posture, instructing a motion sensor inside the joint robot to collect motion data of the joint robot during the motion; analyzing the motion data to determine the motor loss data of the joint robot;
Step S3: analyzing the motor loss data to obtain the current motor loss state information of the joint robot; judging, according to the motor loss state information, whether the joint robot is currently in a motor excessive loss state;
Step S4: adjusting the working state of the corresponding joint motor according to the judgment result for the joint robot.
The beneficial effects of the above technical scheme are: according to the joint robot health state evaluation method based on the digital twin technology, a sensor is arranged in each joint motor of a joint robot to detect motion data of the joint motor, the motion data is analyzed and processed, and motor loss data of the joint robot is determined; analyzing and processing the motor loss data to obtain the current motor loss state information of the joint robot; judging whether the joint robot is in a motor excessive loss state at present according to the motor loss state information; according to the method, by means of a digital twinning technology, the running state of the joint robot is visually displayed, a corresponding digital twinning model is constructed, historical action data of a joint motor of the joint robot are mined and analyzed, health state assessment of the joint robot is achieved, waste of overhaul resources is reduced, and unplanned shutdown events of the joint robot are effectively avoided.
Preferably, in step S1, the instructing the joint robot to move and acquiring the motion image of the joint robot specifically includes:
sending an action instruction to a joint motor of the joint robot to drive the corresponding joint motor to operate, so that the joint robot acts; and simultaneously shooting the motion process of the joint robot so as to obtain a motion image of the joint robot.
The beneficial effects of the above technical scheme are: through the mode, the action command can be sent to each joint motor of the joint robot independently, each joint motor is driven to operate independently, and the overall action flexibility of the joint robot is improved. In addition, the motion process of the joint robot is dynamically shot to obtain a corresponding motion image, so that whether the joint robot accurately executes a corresponding motion instruction or not can be conveniently and accurately analyzed subsequently.
Preferably, in step S1, the analyzing the motion image and determining whether the current actual motion posture of the joint robot matches the expected motion posture specifically includes:
sequentially extracting a plurality of action picture frames from the action image, and identifying and obtaining the current action amplitude and action direction of a manipulator of the joint robot from each action picture frame to be used as an actual action posture;
comparing that action amplitude with the action amplitude corresponding to the expected motion posture, and determining the action amplitude deviation value between the two amplitudes;
comparing that action direction with the action direction corresponding to the expected motion posture, and determining the action direction deviation angle value between the two directions;
if the action amplitude deviation value is greater than or equal to a preset amplitude deviation threshold, or the action direction deviation angle value is greater than or equal to a preset deviation angle threshold, determining that the actual motion posture does not match the expected motion posture; otherwise, determining that the actual motion posture matches the expected motion posture.
The beneficial effects of the above technical scheme are: through the mode, the image frame recognition processing is carried out on the motion image of the joint robot, whether the current actual motion posture of the joint robot is matched with the expected motion posture corresponding to the motion command received by the joint robot or not is judged from the two aspects of motion amplitude and motion direction, and therefore the current motion state of the joint robot is accurately judged in a quantitative mode.
Preferably, in the step S2, when the actual motion posture is not matched with the expected motion posture, the motion sensor inside the joint robot is instructed to collect motion data of the joint robot in the motion process; analyzing and processing the motion data, and determining the motor loss data of the joint robot specifically comprises the following steps:
each joint motor of the joint robot is provided with a torque sensor, and when the actual motion posture does not match the expected motion posture, each torque sensor is instructed to collect, at a preset sampling frequency value, the rotation angle of the corresponding joint motor relative to its preset initial position during the period from the start-up of the joint robot to the current moment, and this serves as the motion data;
and analyzing and processing the motion data to determine the power loss of each joint motor of the joint robot.
The beneficial effects of the above technical scheme are: through the mode, when the current actual motion posture of the joint robot is not matched with the expected motion posture, torque detection is further carried out on each joint motor of the joint robot, so that motion data of each joint motor is subjected to detailed analysis, the loss power of each joint motor is determined, and comprehensive detailed analysis of all joint motors of the joint robot is realized.
Preferably, in step S2, the analyzing the motion data and determining the power loss of each joint motor of the joint robot specifically includes:
obtaining, by using the following formula (1), the serial numbers of the joint motors that are simultaneously in the working state during the period from the start-up of the joint robot to the current moment, according to a preset sampling frequency value and the rotation angle of each joint motor relative to its preset initial position collected by the corresponding torque sensor at each sampling moment,

(Formula (1) is shown as an image in the original publication.)

In the above formula (1), E[(t₀ + k×T), i] represents the working state value of the i-th joint motor of the joint robot at moment t₀ + k×T during the period from the start-up of the joint robot to the current moment; t₀ represents the moment at which the joint robot is started; T represents the sampling period of the torque sensors, which is the reciprocal of the preset sampling frequency value; k represents an integer variable whose value ranges over the sampling indices from start-up to the current moment, with upper bound ⌊(t − t₀)/T⌋, where t represents the current moment and ⌊·⌋ represents the rounding-down operation; θ[(t₀ + k×T), i] represents the rotation angle value of the i-th joint motor of the joint robot relative to its preset initial position at moment t₀ + k×T; θ[(t₀ + (k−1)×T), i] represents the rotation angle value of the i-th joint motor of the joint robot relative to its preset initial position at moment t₀ + (k−1)×T;
if E[(t₀ + k×T), i] = 1, the i-th joint motor of the joint robot is in the working state at moment t₀ + k×T during the period from start-up to the current moment;
if E[(t₀ + k×T), i] = 0, the i-th joint motor of the joint robot is in the stopped state at moment t₀ + k×T during that period;
then, by using the following formula (2), the loss power value of each joint motor of the joint robot during the period from the start-up of the joint robot to the current moment is obtained according to the serial numbers of the joint motors in the working state, the total power of the joint robot collected at each sampling moment and the torque value of the corresponding joint motor collected by each torque sensor,

(Formula (2) is shown as an image in the original publication.)

In the above formula (2), p[(t₀ + k×T), i] represents the loss power value of the i-th joint motor of the joint robot at moment t₀ + k×T during the period from start-up to the current moment; r_total represents the total internal resistance value of the joint robot; U represents the working voltage value of the joint robot; P(t₀ + k×T) represents the total power value of the joint robot at moment t₀ + k×T during that period; G[(t₀ + k×T), i] represents the torque value of the i-th joint motor of the joint robot at moment t₀ + k×T; n represents the total number of joint motors included in the joint robot.
The beneficial effects of the above technical scheme are: obtaining the joint motor number of the robot which works at each sampling moment in the current time period from the beginning of use by using the formula (1) according to the sampling frequency value and the rotation angle of each joint motor relative to the initial position of the motor, which is obtained by sampling at each sampling moment, so that the working condition of each joint motor at the same moment is known, and the subsequent subdivision of power consumption is facilitated; and then estimating the loss power of each joint motor of the robot from the beginning to the current time period at each sampling time according to the serial number of the joint motor, the total power of the robot sampled at each sampling time and the torque of each joint motor, which is acquired by a torque sensor arranged on each joint motor, of the robot from the beginning to the current time period, by using the formula (2), so that the total power consumption is subdivided on each joint motor, the service life and the service condition of each joint motor are conveniently refined and analyzed, and a large amount of historical data accumulated by equipment is mined and analyzed, so that favorable data is provided.
Preferably, in step S3, analyzing the motor loss data to obtain current motor loss state information of the joint robot specifically includes:
obtaining a loss line graph of each joint motor of the joint robot according to the loss power value of each joint motor by using the following formula (3),

(Formula (3) is shown as an image in the original publication.)

In the above formula (3), h[(t₀ + k×T), i] represents the height value of the polyline point corresponding to moment t₀ + k×T on the abscissa (time) axis of the loss line graph of the i-th joint motor of the joint robot; if the denominator becomes zero during the calculation, h[(t₀ + k×T), i] is directly set equal to H; H represents the maximum display height of the loss line graph; the remaining two terms of formula (3) represent, respectively, the maximum and the minimum of the loss power values of the i-th joint motor at the sampling moments during the period from the start-up of the joint robot to the current moment.
The beneficial effects of the above technical scheme are: the loss line graph of each joint motor of the robot is obtained according to the loss power of each joint motor of the robot from the beginning to each sampling moment in the current time period by using the formula (3), the line graph with the uniform display height is favorable for observing and comparing the use condition of each joint motor, and the display with the maximum display height can also increase the watching comfort degree of a watcher.
Preferably, in step S3, determining whether the joint robot is currently in the motor excessive loss state according to the motor loss state information specifically includes:
determining an average power loss value of each joint motor in the process from the start of the joint robot to the current moment according to the loss line graph of each joint motor;
if the number of the joint motors with the average power loss value larger than or equal to the preset power loss threshold value in the joint robot exceeds the preset number threshold value, determining that the joint robot is in a motor excessive loss state currently; otherwise, determining that the joint robot is not in the motor excessive loss state currently.
The beneficial effects of the above technical scheme are: by the mode, the loss line graph of each joint motor is taken as a reference, the power loss conditions of all joint motors of the joint robot are integrated, and therefore whether the joint robot is in a motor excessive loss state or not is judged.
Preferably, in step S4, the adjusting the operating state of the corresponding joint motor according to the judgment result of the joint robot specifically includes:
and when the joint robot is determined to be in the motor excessive loss state at present, carrying out restarting treatment or bearing lubrication treatment on the joint motor with the average power loss value being greater than or equal to the preset power loss threshold value.
The beneficial effects of the above technical scheme are: by the mode, the joint robot can be maintained in time, and the problem that the joint robot is not reversible is effectively solved.
As can be seen from the above embodiments, in the joint robot health state assessment method based on the digital twin technology, a sensor is arranged in each joint motor of the joint robot to detect the motion data of that joint motor; the motion data are analyzed to determine the motor loss data of the joint robot; the motor loss data are then analyzed to obtain the current motor loss state information of the joint robot, from which it is judged whether the joint robot is currently in a motor excessive loss state. By means of the digital twin technology, the running state of the joint robot is visualized, a corresponding digital twin model is constructed, and the historical motion data of the joint motors are mined and analyzed, so that the health state of the joint robot can be assessed, the waste of overhaul resources is reduced, and unplanned shutdowns of the joint robot are effectively avoided.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (4)

1. A joint robot health state assessment method based on a digital twinning technology is characterized by comprising the following steps:
step S1, instructing a joint robot to move, and collecting a motion image of the joint robot; analyzing and processing the motion image, and determining whether the current actual motion posture of the joint robot is matched with the expected motion posture;
s2, when the actual action posture is not matched with the expected action posture, indicating a motion sensor in the joint robot to acquire motion data of the joint robot in the action process; analyzing and processing the motion data to determine motor loss data of the joint robot;
s3, analyzing and processing the motor loss data to obtain the current motor loss state information of the joint robot; judging whether the joint robot is in a motor excessive loss state at present according to the motor loss state information;
s4, adjusting the working state of the corresponding joint motor according to the judgment result of the joint robot;
in the step S2, when the actual motion posture is not matched with the expected motion posture, instructing a motion sensor inside the joint robot to acquire motion data of the joint robot in a motion process; analyzing and processing the motion data, and determining the motor loss data of the joint robot specifically comprises the following steps:
each joint motor of the joint robot is provided with a torque sensor, and when the actual action posture is not matched with the expected action posture, each torque sensor is indicated to collect the rotation angle of each joint motor relative to a preset initial position in the process from the start of the joint robot to the current moment by a preset sampling frequency value to serve as the action data;
analyzing and processing the motion data to determine the power loss of each joint motor of the joint robot;
obtaining, by using the following formula (1), the serial numbers of the joint motors that are simultaneously in the working state during the period from the start-up of the joint robot to the current moment, according to the preset sampling frequency value and the rotation angle of each joint motor relative to its preset initial position collected by the corresponding torque sensor at each sampling moment,

(Formula (1) is shown as an image in the original publication.)

In the above formula (1), E[(t₀ + k×T), i] represents the working state value of the i-th joint motor of the joint robot at moment t₀ + k×T during the period from the start-up of the joint robot to the current moment; t₀ represents the moment at which the joint robot is started; T represents the sampling period of the torque sensors, which is the reciprocal of the preset sampling frequency value; k represents an integer variable whose value ranges over the sampling indices from start-up to the current moment, with upper bound ⌊(t − t₀)/T⌋, where t represents the current moment and ⌊·⌋ represents the rounding-down operation; θ[(t₀ + k×T), i] represents the rotation angle value of the i-th joint motor of the joint robot relative to its preset initial position at moment t₀ + k×T; θ[(t₀ + (k−1)×T), i] represents the rotation angle value of the i-th joint motor of the joint robot relative to its preset initial position at moment t₀ + (k−1)×T;
if E[(t₀ + k×T), i] = 1, the i-th joint motor of the joint robot is in the working state at moment t₀ + k×T during the period from start-up to the current moment;
if E[(t₀ + k×T), i] = 0, the i-th joint motor of the joint robot is in the stopped state at moment t₀ + k×T during that period;
then, by using the following formula (2), the loss power value of each joint motor of the joint robot during the period from the start-up of the joint robot to the current moment is obtained according to the serial numbers of the joint motors in the working state, the total power of the joint robot collected at each sampling moment and the torque value of the corresponding joint motor collected by each torque sensor,

(Formula (2) is shown as an image in the original publication.)

In the above formula (2), p[(t₀ + k×T), i] represents the loss power value of the i-th joint motor of the joint robot at moment t₀ + k×T during the period from start-up to the current moment; r_total represents the total internal resistance value of the joint robot; U represents the working voltage value of the joint robot; P(t₀ + k×T) represents the total power value of the joint robot at moment t₀ + k×T during that period; G[(t₀ + k×T), i] represents the torque value of the i-th joint motor of the joint robot at moment t₀ + k×T; n represents the total number of joint motors included in the joint robot;
in the step S3, analyzing and processing the motor loss data to obtain current motor loss state information of the joint robot; and then according to the motor loss state information, judging whether the joint robot is currently in a motor excessive loss state, and specifically comprising the following steps:
obtaining a loss line graph of each joint motor of the joint robot according to the loss power value of each joint motor by using the following formula (3),

(Formula (3) is shown as an image in the original publication.)

In the above formula (3), h[(t₀ + k×T), i] represents the height value of the polyline point corresponding to moment t₀ + k×T on the abscissa (time) axis of the loss line graph of the i-th joint motor of the joint robot; if the denominator becomes zero during the calculation, h[(t₀ + k×T), i] is directly set equal to H; H represents the maximum display height of the loss line graph; the remaining two terms of formula (3) represent, respectively, the maximum and the minimum of the loss power values of the i-th joint motor at the sampling moments during the period from the start-up of the joint robot to the current moment;
determining an average power loss value of each joint motor in the process from the start of the joint robot to the current moment according to the loss line graph of each joint motor;
if the number of the joint motors with the average power loss value larger than or equal to the preset power loss threshold value in the joint robot exceeds the preset number threshold value, determining that the joint robot is in a motor excessive loss state currently; otherwise, determining that the joint robot is not in the motor excessive loss state currently.
2. The joint robot health status assessment method based on digital twinning technique as claimed in claim 1, characterized in that:
in step S1, instructing the joint robot to move, and acquiring a motion image of the joint robot specifically includes:
sending an action instruction to a joint motor of the joint robot to drive the corresponding joint motor to operate, so that the joint robot acts; and meanwhile, shooting the motion process of the joint robot so as to obtain a motion image of the joint robot.
3. The joint robot health state evaluation method based on the digital twin technique according to claim 2, characterized in that:
in step S1, analyzing the motion image to determine whether the current actual motion posture of the joint robot matches the expected motion posture specifically includes:
sequentially extracting a plurality of action picture frames from the action image, and identifying and obtaining the current action amplitude and action direction of a manipulator of the joint robot from each action picture frame to be used as an actual action posture;
comparing the action amplitude with an action amplitude corresponding to the expected action posture, and determining an action amplitude deviation value between the action amplitude and the expected action posture;
comparing the action direction with the action direction corresponding to the expected action posture, and determining an action direction deviation angle value between the two directions;
if the action amplitude deviation value is greater than or equal to a preset amplitude deviation threshold value, or the action direction deviation angle value is greater than or equal to a preset deviation angle threshold value, determining that the actual action attitude is not matched with the expected action attitude; otherwise, determining that the actual motion gesture matches the expected motion gesture.
4. The joint robot health status assessment method based on digital twinning technique as claimed in claim 1, characterized in that:
in step S4, the adjusting the working state of the corresponding joint motor according to the judgment result of the joint robot specifically includes:
and when the joint robot is determined to be in the motor excessive loss state at present, carrying out restarting treatment or bearing lubrication treatment on the joint motor of which the average power loss value is greater than or equal to a preset power loss threshold value.
CN202210686455.8A 2022-06-16 2022-06-16 Joint robot health state assessment method based on digital twinning technology Active CN114918976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210686455.8A CN114918976B (en) 2022-06-16 2022-06-16 Joint robot health state assessment method based on digital twinning technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210686455.8A CN114918976B (en) 2022-06-16 2022-06-16 Joint robot health state assessment method based on digital twinning technology

Publications (2)

Publication Number Publication Date
CN114918976A CN114918976A (en) 2022-08-19
CN114918976B true CN114918976B (en) 2022-12-02

Family

ID=82814710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210686455.8A Active CN114918976B (en) 2022-06-16 2022-06-16 Joint robot health state assessment method based on digital twinning technology

Country Status (1)

Country Link
CN (1) CN114918976B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109719730A (en) * 2019-01-25 2019-05-07 温州大学 A kind of twin robot of number of breaker flexibility assembling process

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108107841B (en) * 2017-12-26 2020-12-18 山东大学 Numerical twin modeling method of numerical control machine tool
CN110600132B (en) * 2019-08-31 2023-12-15 深圳市广宁股份有限公司 Digital twin intelligent health prediction method and device based on vibration detection
CN115699050A (en) * 2019-11-05 2023-02-03 强力价值链网络投资组合2019有限公司 Value chain network control tower and enterprise management platform
CN110861123A (en) * 2019-11-14 2020-03-06 华南智能机器人创新研究院 Method and device for visually monitoring and evaluating running state of robot
US20210150679A1 (en) * 2019-11-18 2021-05-20 Immervision, Inc. Using imager with on-purpose controlled distortion for inference or training of an artificial intelligence neural network
CN111077867A (en) * 2019-12-25 2020-04-28 北京航空航天大学 Method and device for dynamically simulating assembly quality of aircraft engine based on digital twinning
CN111230887B (en) * 2020-03-10 2021-01-26 合肥学院 Industrial gluing robot running state monitoring method based on digital twin technology
CN111695734A (en) * 2020-06-12 2020-09-22 中国科学院重庆绿色智能技术研究院 Multi-process planning comprehensive evaluation system and method based on digital twin and deep learning
CN113255220B (en) * 2021-05-31 2022-12-06 西安交通大学 Gear pump maintenance method based on digital twinning
CN113485295A (en) * 2021-07-07 2021-10-08 西北工业大学 Four-legged robot fault prediction method, device and equipment based on digital twin
AU2021105076A4 (en) * 2021-08-06 2022-04-14 Monika Bhatt Machine learning based Digital Twin Architecture Model and Communication Interfaces for Cloud based Cyber Physical Systems for Industry 4.0
CN114418042B (en) * 2021-12-30 2022-07-22 智昌科技集团股份有限公司 Industrial robot operation trend diagnosis method based on cluster analysis

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109719730A (en) * 2019-01-25 2019-05-07 温州大学 A kind of twin robot of number of breaker flexibility assembling process

Also Published As

Publication number Publication date
CN114918976A (en) 2022-08-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant