CN117067261A - Robot monitoring method, device, equipment and storage medium

Info

Publication number: CN117067261A
Authority: CN (China)
Prior art keywords: pose, robot, fusion, current, detection
Legal status: Pending (assumed; not a legal conclusion)
Application number: CN202311286888.5A
Other languages: Chinese (zh)
Inventors: 赖嘉骏, 张圆, 吴嘉嘉, 李华清, 胡金水
Assignee (current and original): iFlytek Co Ltd
Application filed by iFlytek Co Ltd; priority to CN202311286888.5A

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02: Sensing devices
    • B25J19/06: Safety devices
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

To address the problem that a single algorithm is easily disturbed by environmental and other factors, producing erroneous monitoring information, the application uses at least two pose detection algorithms to obtain current pose detection results of the robot. These may include a pose detection algorithm based on the robot's own sensors and a pose detection algorithm based on external sensors; because the two detect the pose from data collected by different sensors, their advantages are complementary. The results of the at least two algorithms are then fused into a fusion pose, whose credibility is verified against the individual pose detection results; when the fusion pose is determined to be credible, it is saved as the robot's current effective pose. The effective pose obtained in this way is more accurate and more reliable.

Description

Robot monitoring method, device, equipment and storage medium
Technical Field
The present application relates to the field of robotics, and in particular, to a method, apparatus, device, and storage medium for monitoring a robot.
Background
With the development of technology, robots are increasingly used in industry, medical care, the home, and other fields; in logistics, distribution, and similar industries in particular, mobile robots have become an important means of transportation. However, owing to the complexity and uncertainty of mobile robot operation, various problems may arise in practice, such as inaccurate positioning, unreasonable path planning, and abnormal running states. Developing a system that can monitor the state of a mobile robot in real time therefore has significant practical value.
Robot monitoring may cover the robot's pose, running state, and so on. Since the pose is the basis for navigation and other tasks, pose monitoring is especially important. The prior art offers a number of monitoring schemes based on sensors and algorithms, such as the widely used SLAM (Simultaneous Localization and Mapping) algorithm, which can localize the robot and provide its pose information.
Although these methods and technologies solve the problem of mobile robot state monitoring to some extent, they still have limitations: a single algorithm is easily affected by complex environments and similar factors, so the accuracy of the monitoring information cannot be guaranteed and its reliability is low.
Disclosure of Invention
In view of the above problems, the present application provides a robot monitoring method, apparatus, device, and storage medium, so as to offer a highly reliable robot monitoring scheme and improve the accuracy of the monitoring information. The specific scheme is as follows:
in a first aspect, a robot monitoring method is provided, including:
obtaining current pose detection results of the robot with at least two pose detection algorithms, respectively, where the at least two pose detection algorithms include a pose detection algorithm based on the robot's own sensors and a pose detection algorithm based on external sensors;
fusing the pose detection results of the at least two pose detection algorithms to obtain a fusion pose, and verifying whether the fusion pose is credible based on the pose detection results of the respective pose detection algorithms;
and when the fusion pose is credible, saving the fusion pose as the current effective pose of the robot.
Preferably, the at least two pose detection algorithms include:
a simultaneous localization and mapping (SLAM) algorithm, and a visual positioning detection algorithm based on special markers.
Preferably, the visual positioning detection algorithm based on special markers includes: a pose detection algorithm based on ArUco markers.
Preferably, the at least two pose detection algorithms further comprise:
a pose detection algorithm based on a deep neural network model, where a training sample for the deep neural network model includes: a current-frame training picture, the planned target pose, and the historical pose of the robot, and the sample label is the pose of the robot at the current moment;
the process of obtaining the current pose detection result of the robot by adopting a pose detection algorithm based on a deep neural network model comprises the following steps:
acquiring a current frame picture, a target pose corresponding to the current frame and a historical pose of the robot to form input state data;
and feeding the input state data into the trained deep neural network model to obtain the robot's current pose detection result output by the model.
Preferably, the process of obtaining current pose detection results with at least two pose detection algorithms, fusing them into a fusion pose, and verifying whether the fusion pose is credible includes:
obtaining the current first pose of the robot with the SLAM algorithm, and obtaining the current second pose of the robot with the visual positioning detection algorithm based on special markers;
performing pose fusion voting on the first pose and the second pose to obtain a fusion pose;
performing a first credibility judgment on the fusion pose according to its differences from the first pose and the second pose;
if the first credibility judgment determines that the fusion pose is credible, finally determining that the fusion pose is credible; if the first credibility judgment determines that the fusion pose is not credible, obtaining the current third pose of the robot and a confidence with a pose detection algorithm based on a neural network model;
and comparing the difference between the fusion pose and the third pose: if the difference is within a set range and the confidence is not lower than a set confidence threshold, finally determining that the fusion pose is credible; otherwise, finally determining that the fusion pose is not credible.
Preferably, before saving the fusion pose as the current effective pose of the robot, the method further includes:
when the fusion pose is credible, determining a reasonable range for the current pose according to the planned target pose and the historical motion state of the robot;
judging whether the fusion pose falls within the reasonable range of the current pose; if so, determining that the fusion pose is reasonable and executing the step of saving it as the current effective pose of the robot; if not, determining that the fusion pose is unreasonable and discarding the current frame.
Preferably, performing pose fusion voting on the first pose and the second pose to obtain a fusion pose includes:
setting covariance matrices for the first pose and the second pose;
and computing a filtering fusion result of the first pose and the second pose via Kalman filtering with the covariance matrices, obtaining the fusion pose.
Preferably, the method further comprises:
acquiring the actual running-state trajectory of the robot, where a running-state trajectory comprises timestamps and the running-state information corresponding to each timestamp;
and comparing the actual running-state trajectory with the planned theoretical running-state trajectory, and outputting prompt information if the difference exceeds a set tolerance.
Preferably, the running-state information includes any one or more of the following:
internal state class, action state class, event state class.
Preferably, the method further comprises:
determining a target state that is abnormal according to the difference between the actual running-state trajectory and the planned theoretical running-state trajectory;
determining, according to pre-configured association relations between states, each related state associated with the target state and the subordination relations between the related states;
traversing each related state in turn according to the subordination relations, retrieving attribute information of the attribute items of the related states, and analyzing the reliability of the attribute items based on the attribute information;
screening out target attribute items that do not meet the reliability condition, and acquiring the anomaly labels corresponding to the target attribute items;
and forming the anomaly-cause set of the target state from the anomaly labels.
Preferably, the method further comprises:
storing the obtained effective poses of the robot at each timestamp within a set period before the current moment into a real-time pose queue, and storing the planned target poses of the robot at each timestamp into a planned-pose queue;
aligning the real-time pose queue and the planned-pose queue by timestamp, and calculating the pose difference at each timestamp;
and judging, based on the pose differences, whether the current pose of the robot needs to be corrected, and if so, sending a correction instruction to the robot.
In a second aspect, a robot monitoring device is provided, comprising:
a pose calculation unit, configured to obtain current pose detection results of the robot with at least two pose detection algorithms, respectively, where the at least two pose detection algorithms include a pose detection algorithm based on the robot's own sensors and a pose detection algorithm based on external sensors;
a pose fusion unit, configured to fuse the pose detection results of the at least two pose detection algorithms to obtain a fusion pose, and to verify whether the fusion pose is credible based on the pose detection results of the respective pose detection algorithms;
and an effective pose storage unit, configured to save the fusion pose as the current effective pose of the robot when the fusion pose is credible.
In a third aspect, a robot monitoring device is provided, comprising: a memory and a processor;
the memory is used for storing programs;
the processor is configured to execute the program to implement the steps of the robot monitoring method as described above.
In a fourth aspect, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the robot monitoring method as described above.
With the above technical scheme, to avoid the problem that a single algorithm is easily disturbed by environmental and other factors and produces erroneous monitoring information, at least two pose detection algorithms are used to obtain current pose detection results of the robot; these may include a pose detection algorithm based on the robot's own sensors and a pose detection algorithm based on external sensors, which detect the pose from data collected by different sensors and thus complement each other. On this basis, the pose detection results of the at least two algorithms are fused to obtain a fusion pose, which can then be cross-verified against the individual pose detection results to check its credibility; when the fusion pose is determined to be credible, it is saved as the robot's current effective pose. By fusing the detection results of several different pose detection algorithms and verifying the reliability of the fusion pose, the application combines the advantages of the different algorithms and compensates for the errors a single algorithm suffers under environmental and other influences, so the effective pose finally obtained is more accurate and more reliable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
fig. 1 is a schematic flow chart of a robot monitoring method according to an embodiment of the present application;
FIG. 2 illustrates an ArUco marker;
FIG. 3 illustrates a flow chart of a robot monitoring method employing three pose detection algorithms;
FIG. 4 illustrates another robot monitoring method flow diagram employing three pose detection algorithms;
FIG. 5 illustrates a schematic view of various poses of a robot advancement process;
FIG. 6 illustrates a flow chart of a method of monitoring a movement trajectory of a robot;
fig. 7 is a schematic structural diagram of a robot monitoring device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a robot monitoring device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The application provides a robot monitoring scheme applicable to monitoring the running state of a mobile robot, where the running state may include the robot's pose, sensor states, acceleration and deceleration states, and so on. Monitoring the robot's state can effectively improve its working efficiency and safety, providing strong support for the wide application of robots in various fields.
The scheme can be implemented on any terminal with data processing capability. The terminal may be the robot itself, that is, the robot's onboard processor can implement the real-time state monitoring scheme; alternatively, the terminal may be another device in communication with the robot, such as a base station or another terminal.
Next, with reference to fig. 1, the robot monitoring method of the present application may include the following steps:
Step S100: obtain current pose detection results of the robot with at least two pose detection algorithms, respectively.
The at least two pose detection algorithms may include a pose detection algorithm based on the robot's own sensors and a pose detection algorithm based on external sensors. Different pose detection algorithms may rely on different data, so the required data can be gathered according to each algorithm's needs before invoking it to determine the robot's current pose detection result.
Using at least two different pose detection algorithms, each detecting the pose from data collected by different sensors, yields complementary advantages and avoids the inaccurate detection that results when a single algorithm is affected by the environment and other factors.
In this embodiment, the pose detection results of the robot may be determined in real time, that is, the pose detection results of the robot at different times may be sequentially determined according to the time sequence.
Step S110: fuse the pose detection results of the at least two pose detection algorithms to obtain a fusion pose, and verify whether the fusion pose is credible.
Specifically, any of several fusion schemes can be used to fuse the pose detection results obtained in the previous step into a fusion pose, which is more accurate than the detection result of any single algorithm.
Further, to improve reliability, this step can also verify the credibility of the fusion pose, specifically by checking it against the pose detection results of the individual algorithms. The fusion scheme and the credibility verification scheme may differ depending on which pose detection algorithms are used; both are developed in the embodiments below.
Step S120: when the fusion pose is credible, save the fusion pose as the current effective pose of the robot.
Specifically, if the previous step determined that the fusion pose is credible, it can be saved as the robot's current effective pose. If the fusion pose was determined not to be credible, the data of the current frame can be discarded and the last saved effective pose is taken as the robot's effective pose at the current moment.
To avoid the problem that a single algorithm is easily disturbed by environmental and other factors and produces erroneous monitoring information, the robot monitoring method of this embodiment uses at least two pose detection algorithms to obtain current pose detection results of the robot, including a pose detection algorithm based on the robot's own sensors and one based on external sensors; the two detect the pose from data collected by different sensors and thus complement each other. The detection results are fused into a fusion pose, which is cross-verified against the individual results to check its credibility, and saved as the robot's current effective pose once determined credible. By fusing the results of several different algorithms and verifying the fusion pose, the method combines the algorithms' advantages, compensates for single-algorithm errors caused by the environment and other factors, and finally yields a more accurate and more reliable effective pose.
In some embodiments of the present application, the different pose detection algorithms introduced in the foregoing step S100 are exemplarily described.
There are various pose detection algorithms based on the robot's own sensors, such as the currently common SLAM algorithm. SLAM is a technology by which a robot autonomously senses, understands, and builds up environmental information in an unknown environment; it is widely used in robot navigation, unmanned driving, augmented reality, and other fields. Implementing SLAM requires the combined use of multiple kinds of sensor data, such as lidar, cameras, and inertial measurement units, processed and optimized with machine learning, optimization algorithms, and other technical means.
The SLAM algorithm mainly includes the following aspects:
1. sensor selection and data processing
Sensors are the core of SLAM; their selection and data processing directly affect the accuracy and real-time performance of the algorithm. Commonly used sensors include lidar, cameras, and inertial measurement units (IMUs). Lidar offers high precision and long-range detection and suits positioning and mapping over large scales; cameras are low-cost and easy to integrate and suit small or mobile robots.
2. Feature extraction and matching
Feature extraction and matching is one of the key steps in SLAM: meaningful features are extracted from the sensor data and features acquired at different moments are matched, so as to determine the robot's position and attitude. Common feature extraction methods include color histograms, Gabor filters, and Local Binary Patterns (LBP); common matching methods include feature point matching, feature vector matching, and deep-learning-based feature matching.
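As a concrete illustration of feature point matching between frames, the sketch below uses ORB features with brute-force Hamming matching via OpenCV; ORB and the specific parameter values are choices made here for illustration and are not prescribed by this application.

```python
import cv2

def match_frames(img_prev, img_curr, max_matches=100):
    """Extract ORB features in two frames and match them across time."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    if des1 is None or des2 is None:
        return []  # no features found in one of the frames
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    # The matched keypoint pairs can feed pose estimation downstream.
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:max_matches]]
```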
3. Pose estimation and optimization
Pose estimation calculates the robot's position and attitude from the sensor data; optimization refines the robot's motion trajectory so that tasks are completed faster and more efficiently. Common pose estimation methods include least-squares-based, Bayesian-filter-based, and particle-filter-based methods; common optimization methods include those based on gradient descent, genetic algorithms, and ant colony algorithms.
4. Map construction
Map construction is an important link in SLAM: the environmental information collected by the robot in the unknown environment is organized and managed for subsequent use and analysis. Common methods include grid maps, topological maps, and lidar point-cloud maps. Grid maps suit simple scenes and offer high precision and stability; topological maps suit complex scenes and offer high extensibility and flexibility; lidar point-cloud maps suit large-scale scenes and offer high resolution and coverage.
In this embodiment, a simple algorithm flow of the SLAM algorithm is provided, which may specifically include:
s1, acquiring data of an accelerometer and a gyroscope from an IMU, and calculating the angular speed and the angular displacement of the robot in a current frame.
S2, calculating a rotation matrix R and a translation vector T of the robot in the current frame according to the angular speed and the angular displacement.
S3, converting the rotation matrix R and the translation vector T into quaternion representation so as to facilitate subsequent processing.
And S4, adding the rotation and translation vectors represented by the quaternion into a state vector of the robot, and representing the state information of the robot in the current frame.
S5, predicting the state vector of the current frame according to the state vector of the previous frame and the sensor data of the current frame by using a state transition equation.
S6, comparing the predicted state vector of the current frame with the state vector of the current frame which is actually observed to obtain a state error. And then estimating and correcting the state error by using a Kalman filter to obtain a final state vector.
S7, using the state vector as the robot's pose information and feeding it into the map construction and optimization process.
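A skeletal sketch of steps S5 and S6 above for a planar pose [x, y, yaw], assuming direct pose observations and illustrative noise values; the full flow would also include the quaternion handling of steps S3 and S4.

```python
import numpy as np

F = np.eye(3)                      # state transition (identity plus control)
H = np.eye(3)                      # the pose is observed directly (assumed)
Q = np.diag([0.01, 0.01, 0.005])   # process noise (illustrative values)
R = np.diag([0.05, 0.05, 0.02])    # observation noise (illustrative values)

def predict(x, P, delta_from_imu):
    """S5: propagate the previous state with IMU-integrated motion."""
    x_pred = F @ x + delta_from_imu
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def correct(x_pred, P_pred, z_observed):
    """S6: fuse prediction and observation through the Kalman gain."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (z_observed - H @ x_pred)
    P = (np.eye(3) - K @ H) @ P_pred
    return x, P
```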
There are also various pose detection algorithms based on external sensors, chiefly mobile robot positioning methods based on a monitoring camera. In such a method, a key feature point of the robot observed by the external monitoring camera is put through the projective transformation of three-dimensional vision to obtain the corresponding coordinate point in the world coordinate system, thereby localizing the robot in the world frame. Two technical points are involved:
1. Three-dimensional feature point detection.
Three-dimensional feature point detection means detecting an object's surface in three-dimensional space with computer vision and extracting its surface feature points. These feature points can be used to build a three-dimensional model of the object or for target tracking, recognition, and similar applications. Common detection algorithms include Harris corner detection, SIFT, FAST, SURF, and corner detection on special markers.
2. Three-dimensional projective transformation of feature points.
Camera imaging maps a three-dimensional object onto a two-dimensional picture, a 3D-to-2D physical process; given certain known information, the two-dimensional picture can be back-projected into three-dimensional space according to the pinhole imaging principle.
Chaining these two steps together, the external camera yields the two-dimensional coordinates, in the picture coordinate system, of the key point that stands for the mobile robot, and the three-dimensional projective transformation then recovers its three-dimensional coordinates in the world coordinate system.
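A minimal sketch of this back-projection, assuming a pinhole camera with known intrinsics K and extrinsics (R, t) and a key point lying at a known world height; these knowns stand in for the "certain known information" mentioned above.

```python
import numpy as np

def backproject_to_world(u, v, K, R, t, z_world):
    """Back-project pixel (u, v) to the 3D world point at height z_world."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    ray_world = R.T @ ray_cam                            # rotate ray into world frame
    cam_center = -R.T @ t                                # camera center in world frame
    s = (z_world - cam_center[2]) / ray_world[2]         # scale to the known plane
    return cam_center + s * ray_world                    # 3D point in world frame
```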
There are many pose detection algorithms based on external sensors; they can be further divided according to their processing logic.
For example, a visual positioning detection algorithm based on special markers, such as a pose detection algorithm based on ArUco markers, may be used. FIG. 2 illustrates an ArUco marker. The ArUco-based pose detection algorithm determines the position and orientation of the camera by detecting ArUco markers. An ArUco marker is a square marker consisting of a wide black border and an internal binary matrix that determines its identifier (id). The black border makes the marker fast to detect in an image, and the internal binary code identifies the marker and provides error detection and correction.
ArUco markers can be mounted on the robot, so that an externally installed image sensor captures images of the robot and an image vision algorithm then determines the robot's pose information from the captured images carrying the markers.
It should also be noted that, for the marker-based visual positioning detection algorithm, the special marker (such as an ArUco marker) may instead be placed in the robot's working environment, so that the robot's own image sensor captures images carrying the marker and the image vision algorithm determines the robot's pose. In that configuration, the marker-based algorithm falls into the category of pose detection algorithms based on the robot's own sensors.
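A minimal sketch of ArUco-based pose detection with OpenCV. The aruco API differs across OpenCV versions (this assumes the classic module-level functions), and the calibration values K, dist, and the marker side length are inputs the integrator must supply.

```python
import cv2
import numpy as np

def detect_marker_pose(image, K, dist, marker_len=0.10):
    """Detect one ArUco marker and solve its pose in the camera frame."""
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(image, aruco_dict)
    if ids is None:
        return None  # marker lost, e.g. through rotation or occlusion
    # Marker corner coordinates in the marker's own frame (square of side marker_len).
    half = marker_len / 2.0
    obj_pts = np.array([[-half, half, 0], [half, half, 0],
                        [half, -half, 0], [-half, -half, 0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0].reshape(4, 2), K, dist)
    return (rvec, tvec) if ok else None  # rotation and translation of the marker
```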
In addition, a pose detection algorithm based on a deep neural network model can serve as the external-sensor algorithm. The model takes an environment image shot by an external image sensor as input and predicts the robot's pose information.
In the training stage, the current-frame training picture of the robot in the environment, the planned target pose, and the robot's historical pose are collected as a training sample, the robot's pose at the current moment serves as the sample label, and the model is trained on the resulting training data.
When the model is used to obtain the robot's current pose detection result, the current frame picture, the target pose corresponding to the current frame, and the robot's historical pose are assembled into input state data, which is fed into the trained deep neural network model to obtain the current pose detection result output by the model.
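The application specifies the model's inputs (current frame picture, target pose, historical pose) and output (the current pose; the flow below also uses a confidence), but not its architecture. The PyTorch sketch below is one plausible illustrative structure, not the structure claimed here.

```python
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    """Illustrative pose predictor: CNN image encoder + MLP over pose vectors."""
    def __init__(self, pose_dim=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(
            nn.Linear(32 + 2 * pose_dim, 64), nn.ReLU(),
            nn.Linear(64, pose_dim + 1))  # predicted pose plus a confidence logit

    def forward(self, image, target_pose, history_pose):
        feat = self.encoder(image)                        # (N, 32) image features
        x = torch.cat([feat, target_pose, history_pose], dim=1)
        out = self.head(x)
        pose, confidence = out[:, :-1], torch.sigmoid(out[:, -1])
        return pose, confidence
```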
Based on the pose detection algorithms above, the at least two pose detection algorithms used in step S100 of the foregoing embodiment may include the SLAM algorithm and the visual positioning detection algorithm based on special markers, and may further include the pose detection algorithm based on a deep neural network model.
Each of the three pose detection algorithms exemplified above has its own strengths. The SLAM algorithm runs on the robot's own sensors, so it has no monitoring blind zone, but its accuracy is limited and, in complex dynamic scenes, it can produce wrong pose results and thus unstable monitoring. The ArUco-code-based detection algorithm is a fast visual detection algorithm with low compute requirements and accurate detection; however, because the detected target is small, the target is often lost when the robot rotates or is occluded. The deep-neural-network-based method tolerates occlusion better than the ArUco-based algorithm, but it depends heavily on compute, burdens the processor, and is usually less accurate than the ArUco-based result.
In short, every algorithm has advantages and disadvantages. This embodiment therefore invokes the algorithms dynamically according to the actual situation, fuses their outputs, and finally outputs the robot's real-time fusion pose.
Referring to fig. 3, a flowchart of a robot monitoring method using the three pose detection algorithms is illustrated; it may specifically include:
Step S1: estimate the pose with the SLAM algorithm to obtain the current first pose of the robot.
Specifically, data collected by the robot's own sensors, such as radar data from the lidar and inertial data from the inertial sensor, can be obtained, and the SLAM algorithm is then invoked to compute the pose from this sensor data, giving the robot's current first pose.
Step S2: estimate the pose with the ArUco-marker-based algorithm to obtain the current second pose of the robot.
Specifically, this step may use a visual positioning detection algorithm based on special markers, such as the ArUco-based algorithm. The algorithm is invoked on the image captured by the image sensor to compute the robot's current second pose. The image sensor may be external or the robot's own, depending on whether the marker is mounted on the robot or placed in the environment.
Step S3: perform pose fusion voting on the first pose and the second pose to obtain a fusion pose.
Various fusion voting strategies can be used here, such as averaging or weighting. This embodiment further provides a pose fusion voting scheme based on Kalman filtering.
Specifically, covariance matrices for the first pose and the second pose may first be set empirically. The filtering fusion result of the two poses is then computed via Kalman filtering with those covariance matrices, giving the fusion pose.
Kalman filtering is an algorithm that uses a linear system state equation to optimally estimate the system state from its input and output observations. Because the observations include the effects of noise and interference in the system, the optimal estimation can also be seen as a filtering process.
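A minimal sketch of this fusion vote: once covariance matrices are assigned to the two pose estimates, the Kalman measurement update for two direct observations of the same state reduces to the covariance-weighted combination below. The numeric covariances in the example are assumptions.

```python
import numpy as np

def fuse_poses(pose1, cov1, pose2, cov2):
    """Covariance-weighted fusion of two pose estimates (information form)."""
    info1, info2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    fused_cov = np.linalg.inv(info1 + info2)
    fused_pose = fused_cov @ (info1 @ pose1 + info2 @ pose2)
    return fused_pose, fused_cov

# Example with empirically set covariances (values are illustrative assumptions):
p1, c1 = np.array([1.00, 2.00, 0.10]), np.diag([0.04, 0.04, 0.01])  # SLAM pose
p2, c2 = np.array([1.02, 1.98, 0.12]), np.diag([0.01, 0.01, 0.02])  # ArUco pose
fused, _ = fuse_poses(p1, c1, p2, c2)  # lands between the two, nearer the ArUco pose
```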
Step S4: judge for the first time whether the fusion pose is credible; if so, go to step S8, and if not, go to step S5.
Specifically, the first credibility judgment compares the fusion pose with the first pose and the second pose. If the difference from the first pose is within the set range and the difference from the second pose is also within the set range, the fusion pose is deemed credible and step S8 is executed; otherwise the fusion pose fails the first credibility judgment and step S5 is executed.
Step S5: obtain the robot's current third pose and a confidence with the pose detection algorithm based on the neural network model.
Specifically, since the neural-network-based pose detection algorithm demands substantial compute, this embodiment runs this step only when step S4 has judged the fusion pose not credible, keeping the processor's computational overhead as low as possible.
The neural network model computes the robot's current third pose along with a confidence indicating how trustworthy that third pose is.
Step S6: judge for the second time whether the fusion pose is credible; if so, go to step S8, and if not, go to step S7.
Specifically, the difference between the fusion pose and the third pose is compared; if it is within the set range and the confidence is not lower than the set confidence threshold, the fusion pose is finally determined to be credible and step S8 is executed. Otherwise the fusion pose is finally determined not to be credible and step S7 is executed.
Step S7: discard the current frame.
Specifically, once the fusion pose is determined not to be credible, the current frame may be discarded.
Step S8: confirm the current frame as a valid pose frame.
Specifically, once the fusion pose is determined to be credible, the current frame can be confirmed as a valid pose frame.
Step S9: output the latest valid frame and save the fusion pose as the robot's current effective pose.
In the method of this embodiment, when the SLAM algorithm, the marker-based pose detection algorithm, and the neural-network-based pose detection algorithm are used together, the results of the first two are fused and put through the first credibility judgment; if the fusion pose passes, the frame is confirmed as a valid pose frame and the latest fusion pose is output. If it fails, the third detection algorithm is invoked for a second credibility judgment: if that judgment finds the fusion pose credible, it is still accepted; if not, the fusion pose is indeed not credible and the current frame is discarded. While keeping the processor's computation as low as possible, the method guarantees the accuracy of the fusion pose through fusion and cross-verification of multiple algorithms.
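A skeletal version of the decision flow in steps S4 to S9, with assumed thresholds and an injected nn_pose_detector callable; the expensive neural-network detector runs only when the first check fails, matching the motivation given in step S5.

```python
import numpy as np

def validate_fused_pose(fused, pose1, pose2, nn_pose_detector,
                        diff_tol=0.2, conf_thresh=0.8):
    """Return True if the fusion pose should be kept as a valid pose frame."""
    # S4: first credibility check against both contributing poses.
    if (np.linalg.norm(fused - pose1) <= diff_tol and
            np.linalg.norm(fused - pose2) <= diff_tol):
        return True
    # S5 and S6: fall back to the neural-network detector for a second check.
    pose3, confidence = nn_pose_detector()
    return (np.linalg.norm(fused - pose3) <= diff_tol
            and confidence >= conf_thresh)   # False means discard the frame (S7)
```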
Referring further to fig. 4, which illustrates another flowchart of a robot monitoring method using the three pose detection algorithms: comparing fig. 3 and fig. 4, a step S10 may be added between steps S8 and S9, judging whether the fusion pose is reasonable.
If the fusion pose is reasonable, executing the step S9, otherwise, executing the step S7.
The process of judging whether the fusion pose is reasonable specifically can include:
determining a reasonable range for the current pose according to the planned target pose and the historical motion state of the robot, then judging whether the fusion pose falls within that range: if so, the fusion pose is reasonable; if not, it is unreasonable.
As shown in fig. 5, the forward direction is the planned forward direction, the dashed circle z5 is the planned target pose, the solid circle z1 is the pose at the previous time, the dashed circle z3 is the ideal current pose of the forward process, and the dashed circles z2 and z4 are the possible ranges (leftmost and rightmost) of the current pose, respectively.
The reasonable range of the current pose can be determined from the planned target pose and the robot's historical motion state. If the fusion pose lies within this range, it conforms to the trend of the robot's historical motion trajectory and its change does not exceed the reasonable range, i.e. the fusion pose is reasonable; otherwise it is unreasonable.
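A minimal sketch of this rationality check in the plane, assuming a maximum speed, a frame interval, and a simple radius around the ideal pose z3 as the "reasonable range"; the actual range (bounded by z2 and z4 in fig. 5) could be shaped differently.

```python
import numpy as np

def pose_is_reasonable(fused_xy, last_xy, target_xy, max_speed=1.0, dt=0.1):
    """Check the fusion pose against where the robot could plausibly be."""
    step = max_speed * dt                      # farthest plausible travel per frame
    direction = target_xy - last_xy
    norm = np.linalg.norm(direction)
    if norm == 0.0:                            # already at the target pose
        return np.linalg.norm(fused_xy - last_xy) <= step
    ideal = last_xy + step * direction / norm  # like z3 in fig. 5
    # z2 and z4 bound the deviation; here simplified to a radius around z3.
    return np.linalg.norm(fused_xy - ideal) <= step
```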
In this embodiment, after the credibility of the fusion pose is established, its rationality can be further checked; only when the fusion pose is also reasonable is it saved as the robot's current effective pose, and otherwise the current frame is discarded, further improving the accuracy of robot pose monitoring.
Some embodiments of the present application provide another optional extension of the robot monitoring method: on top of the pose monitoring above, this embodiment further monitors and analyzes the robot's running state.
Specifically, the robot's navigation planning unit plans a theoretical running-state trajectory in advance, where a running-state trajectory comprises timestamps and the running-state information corresponding to each timestamp. Owing to obstacles, algorithms, physical factors, and the like, the actual running-state trajectory may differ from the theoretical one. The application compares and evaluates the difference between the two trajectories and, if it exceeds a set tolerance, outputs prompt information, for example a warning message or an error message; a minimal comparison sketch follows the state-type list below.
The running state of the robot can comprise various types, including but not limited to the following types:
1) Internal state classes: whether various sensors are on-line, system operating conditions, etc.
2) Action state class: the moving speed of the robot, triggering obstacle avoidance actions and the like.
3) Event state class: whether the robot collides, whether the robot is avoiding an obstacle, and the like.
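A minimal sketch of the trajectory comparison mentioned above, assuming trajectories stored as dicts keyed by timestamp and a simple numeric tolerance per state item; the real tolerance semantics would depend on the state type.

```python
def compare_trajectories(actual, planned, tolerance):
    """Yield (timestamp, key) pairs whose actual and planned states diverge."""
    for ts in sorted(set(actual) & set(planned)):
        for key, planned_value in planned[ts].items():
            actual_value = actual[ts].get(key)
            if actual_value is None or abs(actual_value - planned_value) > tolerance:
                yield ts, key  # caller emits a warning or error message here
```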
Further, an embodiment of the application provides a scheme for analyzing the causes of abnormal states. It performs association analysis across different states and traces the cause of an anomaly; compared with the conventional scheme of detecting each state independently, it can find the cause of the anomaly faster and more accurately.
The specific process may include:
S1. Determine a target state that is abnormal from the difference between the actual running-state trajectory and the planned theoretical running-state trajectory.
Specifically, the two trajectories are compared; if the actual and theoretical running states at the same timestamp differ too much, the corresponding target state can be determined to be abnormal.
S2. Determine, from the pre-configured association relations between states, the related states associated with the target state and the subordination relations between them.
Specifically, the association relations between different states can be configured in advance according to how the states relate. For example, the event state "collision" is associated, in order of subordination, with the following states:
(1) event state class: whether a movement task is being performed, etc.
(2) Action state class: robot speed, robot pose, obstacle speed, etc.
(3) Internal state classes: IMU, lidar, RGB camera operating status, etc.
Further, each relevant state associated with the target state, and the dependencies between the relevant states, may be found.
S3. Traverse the related states in turn according to the subordination relations, retrieve attribute information for each related state's attribute items, and analyze the reliability of the attribute items based on that information.
Specifically, for each related state, the attribute information of its attribute items can be retrieved. Different related states may have different attribute items. For example, for the event state "whether a movement task is being performed", the attribute item is that event state itself and its attribute information is yes or no. For the internal state "lidar operating state", the attribute items may include transmission frequency, whether on standby, whether abnormal, whether normal, whether publishing messages externally, and so on.
For each related state, a reliability analysis can then be performed on its attribute items based on the attribute information, which may include historical attribute information to aid the analysis.
Taking the action state "robot speed" as an example, the attribute value is a concrete speed; if analysis finds that the speed exceeds the set upper limit, the attribute item "robot speed" can be judged abnormal, failing the reliability condition.
S4. Screen out the target attribute items that fail the reliability condition and acquire their corresponding anomaly labels.
Specifically, for each target attribute item found above to fail the reliability condition, its anomaly label can be fetched; every attribute item may be pre-configured with one. For example, the attribute item "robot speed" may be configured with the anomaly label "abnormal speed".
S5. Form the anomaly-cause set of the target state from the anomaly labels.
Through the analysis above, the anomaly labels found in the related states associated with the abnormal target state are collected into the anomaly-cause set of that target state.
With this method, by establishing the association relations between states in advance, once a target state is found abnormal, its possible causes can be traced back quickly along those relations.
Next, taking an abnormal state of "collision" as an example, the above-described process will be described:
Firstly, according to the configured state association relations, the related states associated with "collision" can be obtained, including:
(1) Event state class: whether a movement task is being performed, etc.
(2) Action state class: robot speed, robot pose, obstacle speed, etc.
(3) Internal state class: IMU, lidar, and RGB camera operating status, etc.
Generating an anomaly analysis flow according to the dependency relationship among the related states:
1. First trace back the event state being processed: whether a movement task is being performed. If the robot is executing a movement task, tag the anomaly "active movement collision"; otherwise tag it "passive collision".
2. Then trace back the action states: speed, pose, and so on. If the robot was overspeeding or its pose changed abnormally (for example, a pose relocation or positioning anomaly had just been triggered), attach the corresponding labels, such as "abnormal speed" or "abnormal positioning".
3. Finally trace back the internal states, checking for anomalies such as lidar message delays or IMU drift, and likewise attach the corresponding anomaly labels.
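A skeletal version of this collision trace. The association table, anomaly labels, and reliability checks below are illustrative assumptions; a real system would drive the checks from recorded state history.

```python
ASSOCIATIONS = {
    "collision": [          # related states in order of subordination
        ("event",    "moving_task_active"),
        ("action",   "robot_speed"),
        ("action",   "robot_pose"),
        ("internal", "lidar_status"),
        ("internal", "imu_status"),
    ],
}

ANOMALY_TAGS = {
    "moving_task_active": "active movement collision",
    "robot_speed":        "abnormal speed",
    "robot_pose":         "abnormal positioning",
    "lidar_status":       "lidar message delay",
    "imu_status":         "IMU drift",
}

def trace_anomaly(target_state, reliability_checks):
    """Traverse the related states in order; collect tags for failed checks."""
    causes = set()
    for _category, attribute in ASSOCIATIONS.get(target_state, []):
        check = reliability_checks.get(attribute)
        if check is not None and not check():   # attribute fails reliability
            causes.add(ANOMALY_TAGS[attribute])
    return causes                                # the anomaly-cause set
```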
Some embodiments of the present application provide another optional extension of the robot monitoring method: on the basis of the foregoing scheme, this embodiment further monitors and analyzes the robot's movement trajectory.
As shown in fig. 6, the monitoring process includes:
The effective poses obtained for each timestamp within a set period before the current moment are stored in a real-time pose queue, and the planned target poses of the robot at each timestamp are stored in a planned-pose queue.
The real-time pose queue and the planned-pose queue are aligned by timestamp, and the pose difference at each timestamp is calculated.
Based on the pose differences, it is judged whether the robot's current pose needs correction; if so, a correction instruction is sent to the robot.
Monitoring and analyzing the robot's movement trajectory in this way provides a basic operational guarantee in the early stage, when algorithms such as navigation and obstacle avoidance are still unstable, so that testing and operation are not repeatedly interrupted by frequent trajectory anomalies and efficiency stays high. Combined with the anomaly tracing mechanism of the previous embodiment, it can also build up a useful error test set for algorithm iteration.
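A minimal sketch of the queue-based check in fig. 6, assuming queues of (timestamp, pose) pairs, a scalar drift threshold, and a send_correction hook supplied by the integrator.

```python
import numpy as np

def check_drift(realtime_queue, planned_queue, threshold, send_correction):
    """Align the two pose queues by timestamp and flag excessive drift."""
    planned_by_ts = dict(planned_queue)
    for ts, actual_pose in realtime_queue:
        planned_pose = planned_by_ts.get(ts)
        if planned_pose is None:
            continue                              # no planned pose at this timestamp
        diff = np.linalg.norm(np.asarray(actual_pose) - np.asarray(planned_pose))
        if diff > threshold:                      # pose has drifted too far
            send_correction(ts, actual_pose, planned_pose)
```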
The following describes a robot monitoring device provided by an embodiment of the present application, and the robot monitoring device described below and the robot monitoring method described above may be referred to correspondingly.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a robot monitoring device according to an embodiment of the present application.
As shown in fig. 7, the apparatus may include:
a pose calculation unit 71, configured to obtain current pose detection results of the robot with at least two pose detection algorithms, respectively, where the at least two pose detection algorithms include a pose detection algorithm based on the robot's own sensors and a pose detection algorithm based on external sensors;
a pose fusion unit 72, configured to fuse the pose detection results of the at least two pose detection algorithms to obtain a fusion pose, and to verify whether the fusion pose is credible based on the pose detection results of the respective pose detection algorithms;
and an effective pose storage unit 73, configured to save the fusion pose as the current effective pose of the robot when the fusion pose is credible.
Optionally, the at least two pose detection algorithms used by the pose calculation unit include:
a simultaneous localization and mapping (SLAM) algorithm, and a visual positioning detection algorithm based on special markers.
The visual positioning detection algorithm based on special markers includes: a pose detection algorithm based on ArUco markers.
Further optionally, the at least two pose detection algorithms used by the pose calculation unit also include:
a pose detection algorithm based on a deep neural network model, where a training sample for the model includes the current-frame training picture, the planned target pose, and the historical pose of the robot, and the sample label is the robot's pose at the current moment. On this basis, the pose calculation unit obtains the robot's current pose detection result with the deep-neural-network-based algorithm as follows:
acquiring the current frame picture, the target pose corresponding to the current frame, and the robot's historical pose to form input state data;
and feeding the input state data into the trained deep neural network model to obtain the robot's current pose detection result output by the model.
Optionally, the process by which the pose calculation unit obtains current pose detection results with at least two pose detection algorithms may include:
first obtaining the robot's current first pose with the SLAM algorithm and its current second pose with the visual positioning detection algorithm based on special markers;
and, upon receiving a first instruction sent by the pose fusion unit, further obtaining the robot's current third pose and a confidence with the pose detection algorithm based on the neural network model.
The pose fusion unit's process of fusing the pose detection results of the at least two algorithms into a fusion pose and verifying its credibility then includes:
performing pose fusion voting on the first pose and the second pose to obtain a fusion pose;
performing a first credibility judgment on the fusion pose according to its differences from the first pose and the second pose;
if the first credibility judgment determines that the fusion pose is credible, finally determining that it is credible; if not, sending the first instruction to the pose calculation unit and, after the pose calculation unit computes the robot's current third pose and confidence, comparing the difference between the fusion pose and the third pose: if the difference is within the set range and the confidence is not lower than the set confidence threshold, finally determining that the fusion pose is credible, and otherwise finally determining that it is not credible.
Optionally, the apparatus of the present application may further include:
a rationality judging unit, configured, before the effective pose storage unit saves the fusion pose as the robot's current effective pose and once the fusion pose is determined credible, to determine the reasonable range of the current pose from the planned target pose and the robot's historical motion state, and to judge whether the fusion pose falls within that range: if so, the fusion pose is reasonable and control passes to the effective pose storage unit; if not, the fusion pose is unreasonable and the current frame is discarded.
Optionally, the process of performing pose fusion voting on the first pose and the second pose by the pose fusion unit to obtain a fused pose includes:
setting covariance matrixes of the first pose and the second pose;
and computing the filtering fusion result of the first pose and the second pose via Kalman filtering with the covariance matrices, obtaining the fusion pose.
Optionally, the apparatus of the present application may further include:
a running-state trajectory acquisition unit, configured to acquire the robot's actual running-state trajectory, where a running-state trajectory comprises timestamps and the running-state information corresponding to each timestamp;
and a trajectory comparison unit, configured to compare the actual running-state trajectory with the planned theoretical running-state trajectory and to output prompt information if the difference exceeds the set tolerance.
Optionally, the running-state information includes any one or more of the following:
internal state class, action state class, event state class.
Optionally, the apparatus of the present application may further include:
an abnormal state determining unit, configured to determine a target state that is abnormal from the difference between the actual running-state trajectory and the planned theoretical running-state trajectory;
an associated state determining unit, configured to determine, from the pre-configured association relations between states, the related states associated with the target state and the subordination relations between them;
a related-state attribute analysis unit, configured to traverse the related states in turn according to the subordination relations, retrieve the attribute information of each related state's attribute items, and analyze the reliability of the attribute items based on that information;
an anomaly label acquisition unit, configured to screen out the target attribute items that fail the reliability condition and acquire their corresponding anomaly labels;
and an anomaly cause determining unit, configured to form the anomaly-cause set of the target state from the anomaly labels.
Optionally, the apparatus of the present application may further include a track monitoring unit, used to execute the following procedure (sketched after this list):
storing the effective pose obtained for each timestamp of the robot within a set time before the current time into a real-time pose queue, and storing the planned target pose of the robot for each of those timestamps into a planned pose queue;
aligning the real-time pose queue and the planned pose queue by timestamp, and calculating the pose difference at each timestamp;
judging, based on the pose differences, whether the current pose of the robot needs to be corrected, and if so, sending a correction instruction to the robot.
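A sketch of this monitoring loop follows; the window length, the deviation threshold, and the mean-deviation criterion are assumptions chosen for illustration.

```python
from collections import deque
import numpy as np

WINDOW = 100                 # assumed number of timestamps in the set time window
CORRECTION_THRESHOLD = 0.15  # assumed tolerated mean pose deviation

realtime_queue = deque(maxlen=WINDOW)  # (timestamp, effective pose) pairs
planned_queue = deque(maxlen=WINDOW)   # (timestamp, planned target pose) pairs

def needs_correction():
    """Align the two queues by timestamp and decide whether to send a
    correction instruction to the robot."""
    planned = dict(planned_queue)
    diffs = [np.linalg.norm(pose - planned[ts])
             for ts, pose in realtime_queue
             if ts in planned]          # timestamp alignment
    return bool(diffs) and float(np.mean(diffs)) > CORRECTION_THRESHOLD
```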
The robot monitoring device provided by the embodiments of the present application can be applied to robot monitoring equipment, where the robot monitoring equipment may be the monitored robot itself, a base station, or another terminal that communicates with the monitored robot. Optionally, fig. 8 shows a block diagram of the hardware structure of the robot monitoring equipment. Referring to fig. 8, the hardware structure may include: at least one processor 1, at least one communication interface 2, at least one memory 3 and at least one communication bus 4;
in the embodiments of the present application, there is at least one of each of the processor 1, the communication interface 2, the memory 3 and the communication bus 4, and the processor 1, the communication interface 2 and the memory 3 communicate with one another through the communication bus 4;
the processor 1 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), one or more integrated circuits configured to implement the embodiments of the present application, or the like;
the memory 3 may include a high-speed RAM memory, and may further include a non-volatile memory, such as at least one magnetic disk memory;
the memory stores a program, and the processor may invoke the program stored in the memory, the program being used to:
at least two pose detection algorithms are adopted to respectively obtain the current pose detection result of the robot, wherein the at least two pose detection algorithms comprise a pose detection algorithm based on a sensor of the robot and a pose detection algorithm based on an external sensor;
fusing the pose detection results of the at least two pose detection algorithms to obtain fused poses, and verifying whether the fused poses are credible or not based on the pose detection results of the pose detection algorithms;
and when the fusion pose is credible, storing the fusion pose as the current effective pose of the robot.
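Read together, the three steps amount to the loop sketched below; every callable is an assumed interface standing in for the units described above, not an API defined by the present application.

```python
def monitor_step(detectors, fuse, is_credible, save_effective_pose):
    """One monitoring cycle: detect with at least two pose detection
    algorithms, fuse the results, verify credibility, and persist only
    a credible fusion pose as the current effective pose."""
    poses = [detect() for detect in detectors]  # onboard-sensor and external-sensor detections
    fused = fuse(poses)
    if is_credible(fused, poses):
        save_effective_pose(fused)
```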
Optionally, for refinements and extensions of the program's functions, reference may be made to the description above.
An embodiment of the present application further provides a storage medium storing a program adapted to be executed by a processor, the program being used to:
at least two pose detection algorithms are adopted to respectively obtain the current pose detection result of the robot, wherein the at least two pose detection algorithms comprise a pose detection algorithm based on a sensor of the robot and a pose detection algorithm based on an external sensor;
fusing the pose detection results of the at least two pose detection algorithms to obtain fused poses, and verifying whether the fused poses are credible or not based on the pose detection results of the pose detection algorithms;
and when the fusion pose is credible, storing the fusion pose as the current effective pose of the robot.
Optionally, for refinements and extensions of the program's functions, reference may be made to the description above.
Finally, it is further noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In the present specification, the embodiments are described in a progressive manner, and each embodiment focuses on its differences from the other embodiments; the embodiments may be combined as needed, and for the parts that are the same or similar, the embodiments may be referred to one another.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (14)

1. A robot monitoring method, comprising:
at least two pose detection algorithms are adopted to respectively obtain the current pose detection result of the robot, wherein the at least two pose detection algorithms comprise a pose detection algorithm based on a sensor of the robot and a pose detection algorithm based on an external sensor;
fusing the pose detection results of the at least two pose detection algorithms to obtain fused poses, and verifying whether the fused poses are credible or not based on the pose detection results of the pose detection algorithms;
and when the fusion pose is credible, storing the fusion pose as the current effective pose of the robot.
2. The method of claim 1, wherein the at least two pose detection algorithms comprise:
a simultaneous localization and mapping (SLAM) algorithm, and a visual positioning detection algorithm based on a special mark.
3. The method of claim 2, wherein the special mark-based visual location detection algorithm comprises: pose detection algorithm based on ArUco mark.
4. The method of claim 2, wherein the at least two pose detection algorithms further comprise:
a pose detection algorithm based on a deep neural network model, wherein the training samples used in training the deep neural network model include: a training picture of the current frame, a target pose, and a historical pose of the robot, and the sample label is the pose of the robot at the current moment;
the process of obtaining the current pose detection result of the robot by adopting a pose detection algorithm based on a deep neural network model comprises the following steps:
acquiring a current frame picture, a target pose corresponding to the current frame and a historical pose of the robot to form input state data;
and sending the input state data into the trained deep neural network model to obtain the current pose detection result of the robot output by the model.
5. The method of claim 4, wherein obtaining the current pose detection results of the robot respectively by at least two pose detection algorithms, fusing the pose detection results of the at least two pose detection algorithms to obtain a fusion pose, and verifying whether the fusion pose is credible based on the pose detection results of each of the pose detection algorithms comprises:
obtaining a current first pose of the robot by adopting the SLAM algorithm, and obtaining a current second pose of the robot by adopting a visual positioning detection algorithm based on special marks;
performing pose fusion voting on the first pose and the second pose to obtain a fusion pose;
performing first credibility judgment on the fusion pose according to the respective difference between the fusion pose and the first pose and the second pose;
if the first credibility judgment determines that the fusion pose is credible, finally determining that the fusion pose is credible; if the first credibility judgment determines that the fusion pose is not credible, adopting the pose detection algorithm based on the deep neural network model to obtain the current third pose and confidence of the robot;
and comparing the difference between the fusion pose and the third pose; if the difference is within a set difference range and the confidence is not lower than a set confidence threshold, finally determining that the fusion pose is credible, and otherwise, finally determining that the fusion pose is not credible.
6. The method of claim 1, further comprising, prior to saving the fused pose as the current valid pose of the robot:
when the fusion pose is credible, determining a reasonable range of the current pose according to the planned target pose and the historical motion state of the robot;
judging whether the fusion pose is in the reasonable range of the current pose, if so, determining that the fusion pose is reasonable, executing the step of storing the fusion pose as the current effective pose of the robot, and if not, determining that the fusion pose is unreasonable, and discarding the current frame.
7. The method of claim 5, wherein performing pose fusion voting on the first pose and the second pose to obtain a fused pose comprises:
setting covariance matrixes of the first pose and the second pose;
and calculating a filtering fusion result of the first pose and the second pose through Kalman filtering and the covariance matrix to obtain a fusion pose.
8. The method as recited in claim 1, further comprising:
acquiring an actual running state track of the robot, wherein the running state track comprises a time stamp and running state information corresponding to each time stamp;
and comparing the difference between the actual running state track and the planned theoretical running state track, and outputting prompt information if the difference exceeds the set tolerance.
9. The method of claim 8, wherein the operational status information includes any one or more of:
internal state class, action state class, event state class.
10. The method as recited in claim 8, further comprising:
determining an abnormal target state according to the difference between the actual running state track and the planned theoretical running state track;
according to the association relation between the preset states, determining each related state associated with the target state and the subordinate relation between the related states;
traversing each related state in turn according to the subordinate relations, calling attribute information of attribute items of the related states, and analyzing reliability of the attribute items based on the attribute information;
Screening target attribute items which do not meet reliability conditions, and acquiring abnormal labels corresponding to the target attribute items;
and forming an abnormality reason set of the target state by each abnormality label.
11. The method according to any one of claims 1-10, further comprising:
storing the effective pose obtained for each timestamp of the robot within a set time before the current time into a real-time pose queue, and storing the planned target pose of the robot for each of those timestamps into a planned pose queue;
aligning the real-time pose queue and the planned pose queue by timestamp, and calculating the pose difference at each timestamp;
and judging whether the current pose of the robot needs to be corrected based on the pose difference, and if so, sending a correction instruction to the robot.
12. A robot monitoring device, comprising:
the pose calculating unit is used for respectively obtaining current pose detection results of the robot by adopting at least two pose detection algorithms, wherein the at least two pose detection algorithms comprise a pose detection algorithm based on a sensor of the robot and a pose detection algorithm based on an external sensor;
The pose fusion unit is used for fusing pose detection results of the at least two pose detection algorithms to obtain fusion poses, and verifying whether the fusion poses are credible or not based on the pose detection results of the pose detection algorithms;
and the effective pose storage unit is used for storing the fusion pose as the current effective pose of the robot when the fusion pose is credible.
13. A robot monitoring device, comprising: a memory and a processor;
the memory is used for storing programs;
the processor is configured to execute the program to implement the respective steps of the robot monitoring method according to any one of claims 1 to 11.
14. A storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the robot monitoring method according to any of claims 1-11.
CN202311286888.5A 2023-10-07 2023-10-07 Robot monitoring method, device, equipment and storage medium Pending CN117067261A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311286888.5A CN117067261A (en) 2023-10-07 2023-10-07 Robot monitoring method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117067261A (en) 2023-11-17

Family

ID=88710132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311286888.5A Pending CN117067261A (en) 2023-10-07 2023-10-07 Robot monitoring method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117067261A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117292329A (en) * 2023-11-24 2023-12-26 烟台大学 Method, system, medium and equipment for monitoring abnormal work of building robot
CN117292329B (en) * 2023-11-24 2024-03-08 烟台大学 Method, system, medium and equipment for monitoring abnormal work of building robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination