CN116494201A - Monitoring integrated power machine room inspection robot and unmanned inspection method - Google Patents


Info

Publication number
CN116494201A
Authority
CN
China
Prior art keywords
mechanical arm
machine room
inspection
target
mobile platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310246774.1A
Other languages
Chinese (zh)
Inventor
赵晓敏
魏居斌
施伟
杨晓磊
董方方
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology
Priority to CN202310246774.1A
Publication of CN116494201A
Legal status: Pending


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00 Manipulators mounted on wheels or on carriages
    • B25J5/007 Manipulators mounted on wheels or on carriages mounted on wheels
    • B25J11/00 Manipulators not otherwise provided for
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by motion, path, trajectory planning
    • B25J9/1674 Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676 Avoiding collision or forbidden zones
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention belongs to the field of electric power automation equipment, and specifically relates to a monitoring-integrated power machine room inspection robot and an unmanned inspection method. The inspection robot performs routine inspection of the power machine room and handles specific abnormal states in a timely manner. It comprises a mobile platform, a mechanical arm system, a depth camera system and an upper computer. The mobile platform includes a positioning module and carries at least one laser radar. The mechanical arm system comprises a lower computer, a mechanical arm and an end actuator. The upper computer is electrically connected with the mobile platform, with the lower computer of the mechanical arm system and with the depth camera system, and runs a target recognition model, a target positioning model, a space virtualization module, a navigation module based on the Navigation function package, and a mechanical arm trajectory planning module. The invention addresses the low degree of automation of existing power inspection robots, as well as the high difficulty and cost of realizing a fully automatic one.

Description

Monitoring integrated power machine room inspection robot and unmanned inspection method
Technical Field
The invention belongs to the field of electric power automation equipment, and particularly relates to a monitoring integrated electric power machine room inspection robot and an unmanned inspection method.
Background
The power machine room is the place that hosts all application services for the whole power grid system, and includes substations, distribution substations and the like; it is the platform that seamlessly integrates the data interconnections and application services of power generation, transmission, distribution and retail in the grid. A power machine room contains a large amount of power equipment operating continuously around the clock, so it must be inspected daily in order to discover and eliminate possible abnormalities or damage in time, and to guarantee normal power supply to the users in the service area.
At present, most power machine room inspection is performed manually. Manual inspection is time-consuming and inefficient, and the reliability of its results depends mainly on the individual skill of the technician, so accuracy is comparatively low. With relatively few inspection staff, it is difficult to guarantee enough time in daily work to comprehensively examine the internal and external environment, equipment operating conditions, optical-cable fiber core usage, electrical hazards and so on in every machine room. Any blind spot left in daily inspection may become a serious safety hazard. In addition, some special locations in the machine room may contain toxic or harmful gases (such as SF6) or electrical environments unsuitable for personnel to enter, which also puts personal safety at risk.
With the continuous development of automation and 5G high-speed communication technology, engineers are shifting their research focus to professional inspection robots, which replace humans in performing the daily inspection tasks of the power machine room and can overcome the shortcomings of manual inspection in efficiency, reliability and safety. The main advantages of robot inspection over manual inspection are: 1. The daily maintenance cost of an inspection robot is far lower than labor cost. 2. An inspection robot works faster than a human and can operate continuously for long periods. 3. The inspection results of a robot are more reliable and accurate, and nothing is omitted. 4. An inspection robot can perform inspection or disposal tasks that personnel cannot.
Although the prospects of power inspection robots are very bright, the research and development difficulty is considerable. For example: how to control a robot moving in a machine-room environment full of complex equipment and pipelines so that it operates autonomously and efficiently; how to train the robot to identify and check a large number of devices and to complete data acquisition and analysis accurately; and how to control the robot's motion precisely so that, when handling an abnormality, it does not damage the healthy parts of the power equipment. All of these are technical problems that must be solved. For these reasons there is still no fully automatic power inspection robot product on the market; the robots in practical use are mainly remotely operated by technicians, and these "semi-automatic" robots are chiefly used to complete inspection or disposal tasks that cannot be done manually.
Disclosure of Invention
The invention provides a monitoring-integrated power machine room inspection robot and an unmanned inspection method, aiming to solve the problems of the low automation degree of existing power inspection robots and the high difficulty and cost of realizing a fully automatic one.
The invention is realized by adopting the following technical scheme:
the utility model provides a robot is patrolled and examined to control integrated electric power computer lab, it is used for carrying out the inspection of normalization to electric power computer lab to in time handle specific abnormal state, this robot is patrolled and examined to electric power computer lab includes: the system comprises a mobile platform, a mechanical arm system, a depth camera system and an upper computer.
The mobile platform is used as a running mechanism of the inspection robot of the electric power machine room; the mobile platform comprises a positioning module and is provided with at least one laser radar. The positioning module is used for acquiring geographic coordinates of the mobile platform in the electric machine room; lidar is used to detect obstacles in the direction of motion.
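The lidar's role here (detecting obstacles along the direction of motion) reduces to a range-scan check. A minimal sketch; the sector width and stop distance are illustrative assumptions, not values from the patent:

```python
import math

def obstacle_ahead(ranges, angles, heading=0.0,
                   sector=math.radians(30), stop_dist=0.5):
    """Return True if any lidar return inside the heading sector is closer
    than stop_dist (metres). `ranges` and `angles` are parallel lists from
    one scan; angles are in radians from the platform's forward axis."""
    half = sector / 2.0
    for r, a in zip(ranges, angles):
        if abs(a - heading) <= half and r < stop_dist:
            return True
    return False
```

With several radars around the platform circumference, the same check would be run per radar with its own heading offset.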
The mechanical arm system comprises a lower computer, a mechanical arm and an end actuator. The lower computer controls the pose and motion state of the mechanical arm, and also controls the actions of the end actuator. The fixed end of the mechanical arm is mounted on the mobile platform, and the free end carries the end actuator. The end actuator performs a specific action to carry out the corresponding manipulation task.
The depth camera system is mounted at the free end of the mechanical arm, positioned so that no motion trajectory of the depth camera interferes with that of the end actuator. The field of view of the depth camera covers the end actuator and the object it manipulates.
The upper computer is electrically connected with the mobile platform, with the lower computer of the mechanical arm system and with the depth camera system. It runs: a target recognition model and a target positioning model based on YOLO v5; a space virtualization module based on a SLAM algorithm; a navigation module based on the Navigation function package; and a mechanical arm trajectory planning module based on MoveIt and the IKFast kinematics plugin. The upper computer performs the following tasks: (1) generating, through the space virtualization module, a virtual model representing the spatial layout of the machine room and the equipment inside it; (2) pre-generating, through the trajectory planning module, the pose control command of the mechanical arm at each track point, according to the spatial relation between the depth camera at that track point of the inspection path and the target object to be inspected; (3) combining the data acquired in real time by the laser radar and the depth camera, generating an optimized motion trajectory for the mobile platform through the navigation module during the inspection stage, and issuing the corresponding sequence of motion control commands; (4) performing target recognition on the depth images acquired in real time by the depth camera through the target recognition model, and then, once a target is recognized, computing its position through the target positioning model; (5) when a manipulation task is executed, feeding the target position and the state data of the mechanical arm system into the trajectory planning module, solving the optimal motion trajectory of the end actuator, and generating the corresponding motion information queue for the mechanical arm system.
In the provided scheme, the mobile platform is an AGV device; it may adopt a wheeled robot chassis, a tracked robot chassis, or any other technically mature, widely available bionic walking robot. The mobile platform further includes a remote control device for manually controlling its running state; a motion control command issued by the remote control device has higher priority than one issued by the upper computer.
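The rule that remote-control commands outrank upper-computer commands amounts to a small arbitration policy on the platform's command queue. A sketch under assumed names (the source labels and payload format are my own, not from the patent):

```python
from dataclasses import dataclass, field
import heapq

# Lower number = higher priority; the remote control outranks the upper computer.
PRIORITY = {"remote": 0, "host": 1}

@dataclass(order=True)
class MotionCommand:
    priority: int
    seq: int                              # tie-breaker: preserve arrival order
    payload: str = field(compare=False)   # the actual motion instruction

class CommandArbiter:
    """Selects the next motion command for the mobile platform to execute."""
    def __init__(self):
        self._queue = []
        self._seq = 0

    def submit(self, source, payload):
        heapq.heappush(self._queue,
                       MotionCommand(PRIORITY[source], self._seq, payload))
        self._seq += 1

    def next_command(self):
        return heapq.heappop(self._queue).payload if self._queue else None
```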
As a further improvement, the mobile platform carries several laser radars distributed around its circumference. The positioning module may be a product based on GPS positioning, base-station positioning, or hybrid base-station/WiFi positioning; given the indoor application environment, a hybrid positioning product with higher indoor accuracy is preferred.
As a further improvement, the mechanical arm has six degrees of freedom, and the end actuator of the mechanical arm system is a mechanical gripper or a bionic multi-finger hand whose surfaces are treated for insulation and wear resistance.
As a further improvement, commands and data are transmitted bidirectionally between the upper computer and the mobile platform, the mechanical arm system and the depth camera system using Socket communication over Ethernet or a wireless link.
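Socket links carry variable-length command and data payloads, so some framing convention is needed on top of the byte stream. A common choice (assumed here; the patent does not specify a wire format) is a 4-byte big-endian length prefix:

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a payload with its 4-byte big-endian length for Socket transport."""
    return struct.pack(">I", len(payload)) + payload

def deframe(buffer: bytes):
    """Split one complete frame off the front of `buffer`.
    Returns (payload, rest), or (None, buffer) if the frame is incomplete."""
    if len(buffer) < 4:
        return None, buffer
    (length,) = struct.unpack(">I", buffer[:4])
    if len(buffer) < 4 + length:
        return None, buffer
    return buffer[4:4 + length], buffer[4 + length:]
```

The same helpers serve both directions, since the links are bidirectional.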
As a further improvement, the target recognition model is obtained by training a YOLO v5 base model. Its input is the original RGB-D image acquired by the depth camera; its output is an RGB-D image containing a selection box around each recognized target.
After the target recognition model outputs an image containing the target selection box, the target positioning model computes the spatial position of the target object in three steps:
(1) From the pixel position of the recognized target's selection box in the RGB-D image and the depth information of that pixel region, compute a first coordinate of the target relative to the center of the camera lens.
(2) From the mounting position of the depth camera on the mechanical arm system, convert the first coordinate into a second coordinate whose origin is the free end of the mechanical arm.
(3) From the real-time position of the inspection robot inside the machine room, convert the second coordinate into an absolute coordinate in the world coordinate system, obtaining the actual spatial position of the target object.
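The three steps above form a chain of transforms: pinhole back-projection into camera coordinates, then camera-to-arm and arm-to-world rigid-body transforms. A minimal sketch with homogeneous matrices; the intrinsics and mounting offsets are placeholders, not calibrated values from the patent:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Step (1): pixel (u, v) at `depth` metres -> camera-frame (x, y, z)."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

def apply(T, p):
    """Apply a 4x4 homogeneous transform (nested lists) to a 3-D point."""
    x, y, z = p
    return tuple(T[i][0]*x + T[i][1]*y + T[i][2]*z + T[i][3] for i in range(3))

def locate(u, v, depth, intrinsics, T_arm_cam, T_world_arm):
    """Steps (1)-(3): pixel + depth -> camera frame -> arm frame -> world frame."""
    p_cam = backproject(u, v, depth, *intrinsics)
    p_arm = apply(T_arm_cam, p_cam)    # step (2): camera mounted on the free end
    return apply(T_world_arm, p_arm)   # step (3): robot's real-time pose
```

In practice `T_world_arm` would come from the positioning module's real-time pose estimate, and `T_arm_cam` from the hand-eye calibration of the camera mount.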
As a further improvement, the upper computer also runs a feature matching model based on a graph convolutional neural network and an OCR character recognition model. The feature matching model matches a recognized object against typical state images of that object in a database, thereby determining its working state. The OCR model first crops the local region of the recognized target from the image and then recognizes the characters or symbols in the cropped image, thereby extracting the textual information on the target object.
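In the patent the features come from a graph convolutional network; that extractor is not shown here. This sketch covers only the final matching step, comparing a feature vector against typical-state templates by cosine similarity (the labels and vectors are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_state(feature, templates):
    """Return the state label whose typical-state feature vector in the
    database is most similar to the observed `feature`."""
    return max(templates, key=lambda label: cosine(feature, templates[label]))
```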
As a further improvement, while optimizing the motion trajectory the navigation module uses fused features from the laser radar and depth camera data as reference information, and uses the SLAM algorithm to identify and model obstacles accurately.
As a further improvement, the upper computer also includes a storage module for the following data: the raw data collected by the robot while executing inspection tasks, the inspection results obtained by analyzing that data, the event information generated while executing manipulation tasks, and the log files recorded after each inspection task is completed.
The invention also provides an unmanned inspection method for the power machine room, which uses the monitoring-integrated inspection robot described above to perform routine automatic inspection and to actively handle certain specific abnormal events or fault states. The method comprises the following stages:
1. Initialization stage
S1: In the initialization state, motion control commands are issued to the inspection robot so that it traverses every aisle inside the machine room.
S2: The navigation module in the upper computer generates an initialization inspection path from the series of coordinates recorded by the positioning module of the mobile platform.
The initialization inspection path also contains marks corresponding to the key equipment or devices to be inspected.
S3: According to the spatial relation between the depth camera at each track point of the inspection path and the target object to be inspected, the trajectory planning module pre-generates the pose control command of the mechanical arm for each track point of the initialization inspection path.
The pose control commands make the mechanical arm system operate in coordination with the mobile platform, so that the depth camera system can acquire image data of every location and device in the machine room while the platform moves.
2. Routine inspection stage
S4: After an inspection task is triggered periodically, the mobile platform of the inspection robot moves through the power machine room along the initialization inspection path.
S5: While the platform runs, the space virtualization module in the upper computer performs spatial modeling from the data acquired in real time by the depth camera and the laser radar, and the navigation module optimizes the initialization inspection path using the obstacle information in the virtual model, thereby achieving adaptive navigation and obstacle avoidance.
S6: and the upper computer acquires or stores the image or video data acquired by the depth camera in real time, and identifies equipment or devices at the key nodes through the target identification model.
S7: and the upper computer performs feature comparison or character recognition on the identified target object, so as to determine the real-time running state of each key device or apparatus.
3. Event handling stage
When step S7 finds that the operating state of any device is abnormal, or when the robot reaches the position of an operation object assigned by the machine room control center, the following active handling is executed:
S8: The position of the operation object is computed by the target positioning model.
S9: The upper computer steers the mobile platform toward the operation object, guided by the change of the target's depth information in the depth image.
S10: After the closest operating position is reached, the upper computer computes the precise position of the operation object through the target positioning model. Then, combining the state data uploaded by the mechanical arm system, it solves the optimal motion trajectory of the end actuator through the trajectory planning module, and finally generates the corresponding motion information queue and sends it to the lower computer.
S11: the lower computer controls the mechanical arm to move according to the motion information queue, and the tail end executing mechanism executes corresponding actions, so that the treatment of the abnormal event or the implementation of the designated task is completed.
S12: after the inspection of all key nodes in the initialized inspection path is completed, the current inspection task is finished, and the upper computer generates a work log.
The technical scheme provided by the invention has the following beneficial effects:
The invention provides a power machine room inspection robot combining a mechanical arm, a mobile platform and a depth camera. The image recognition, state analysis, path planning, motion control and other work required while executing inspection tasks are performed by an upper computer that is independent of the robot's mechanical body. The inspection robot can therefore complete all kinds of inspection and emergency handling work in the machine room autonomously, greatly improving its degree of automation. Moreover, the upper computer can also coordinate several robots, lowering the deployment cost of the whole robot system while further raising work efficiency.
In this scheme the depth camera is mounted at the end of the mechanical arm, where it can acquire multi-angle images of objects together with their depth information, improving the upper computer's recognition accuracy. The depth images also improve the robot's navigation precision and strengthen its obstacle avoidance in a complex machine room.
The robot has a fault-handling capability: the gripper mounted at the end of the mechanical arm can grasp and release objects, and moves and rotates as the pose of the arm's free end changes. The robot can thus operate objects in the machine room, including opening and closing switches and turning buttons, and can shut off a switch in an emergency, improving inspection safety.
The method uses the YOLO v5 algorithm to recognize complex objects and to identify the states of meters, switches and other devices in the machine room. Detection is end to end: the target localization and classification tasks are merged into a single stage, which greatly reduces detection time while keeping the relative recognition error small.
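A one-stage detector such as YOLO v5 emits many overlapping candidate boxes per image; the standard post-processing is confidence filtering followed by non-maximum suppression using intersection-over-union. A self-contained sketch (the thresholds are typical defaults, not values from the patent):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(detections, conf_thresh=0.25, iou_thresh=0.45):
    """detections: list of (box, confidence). Keep high-confidence boxes,
    greedily suppressing heavily overlapping ones."""
    dets = sorted((d for d in detections if d[1] >= conf_thresh),
                  key=lambda d: d[1], reverse=True)
    kept = []
    for box, conf in dets:
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, conf))
    return kept
```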
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
Fig. 1 is a schematic diagram of a hardware architecture of a monitoring-integrated inspection robot for an electric power machine room according to embodiment 1 of the present invention.
Fig. 2 is a schematic structural diagram of the M8DP-type AGV trolley used as the mobile platform during equipment development.
Fig. 3 is a schematic structural diagram of a six-degree-of-freedom mechanical arm used in the device development process.
Fig. 4 is a schematic diagram of a two-finger gripper used in the equipment development process.
Fig. 5 is a flowchart of the sequence of target recognition, positioning and exception handling operations performed by the power machine room inspection robot according to an embodiment of the present invention.
Fig. 6 shows process data recorded during YOLO v5 target recognition of a meter dial in this embodiment.
Fig. 7 and 8 are simulation image examples of the space object modeling by SLAM in embodiment 1 of the present invention.
Fig. 9 is a schematic diagram of a simulation process for controlling a task of the mechanical arm to grasp a target object.
Fig. 10 is a functional block diagram of a host computer in embodiment 1 of the present invention.
Fig. 11 is a flowchart of the steps of the unmanned inspection method provided in embodiment 2 of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Example 1
This embodiment provides a monitoring-integrated power machine room inspection robot for routine inspection of the power machine room and timely handling of specific abnormal states. As shown in Fig. 1, the robot comprises: a mobile platform, a mechanical arm system, a depth camera system and an upper computer. The mobile platform and the mechanical arm system form a freely moving front-end operating body, while the upper computer is the back-end control and data processing center. At the system layer, the upper computer, mobile platform, mechanical arm system and depth camera system exchange commands and data bidirectionally through Socket communication; at the hardware layer, Ethernet or wireless means such as Bluetooth and WiFi realize the interaction between devices.
The mobile platform serves as the running gear of the inspection robot and is in effect an AGV device. It may use a technically mature, widely available wheeled or tracked robot chassis, or any other bionic legged walking robot; for example, the robot dogs developed by several companies at home and abroad, which walk stably over different terrains, are representative legged robots.
A bionic legged mechanism is flexible and compact, but its control algorithm is relatively complex, its working stability and travel speed are lower, and its cost is higher. A tracked mechanism runs stably and has high traction, but its chassis is larger and heavier. This embodiment therefore adopts a wheeled mechanism, which balances flexibility with stable movement, offers high speed, high mobility and strong environmental adaptability, and suits indoor environments with simple, friendly terrain such as a power machine room. As shown in Fig. 2, the design of this embodiment mainly adopts the M8DP-type AGV trolley developed by the Shanghai carpentry robot company as the robot's mobile chassis.
In addition, the mobile platform of this embodiment carries a positioning module and at least one laser radar. The positioning module acquires the geographic coordinates of the platform inside the machine room; the laser radar detects obstacles in the direction of motion. The platform also includes a remote control device for manually controlling its running state, whose motion control commands have higher priority than those of the upper computer. Specifically, several laser radars are distributed around the circumference of the platform. The positioning module may be a product based on GPS positioning, base-station positioning, or hybrid base-station/WiFi positioning; given the indoor application environment, a hybrid positioning product with higher indoor accuracy is preferred.
The mechanical arm system comprises a lower computer, a mechanical arm and an end execution mechanism. The lower computer is used for controlling the pose and the motion state of the mechanical arm, and is also used for controlling the action of the tail end executing mechanism. The fixed end of the mechanical arm is loaded on the movable platform, and the free end is used for installing the end actuating mechanism. The end effector is configured to perform a corresponding manipulation task by performing a specific action.
The structural design of the mechanical arm part mainly considers the requirement that the robot has the function of closely observing and operating the equipment in the machine room, so that the cooperative mechanical arm structure with six degrees of freedom is decided to be adopted in the embodiment, and the mechanical arm adopts the joint modularized design, so that the grabbing action of the robot can be well controlled and regulated. Specifically, as shown in fig. 3, the present embodiment uses an AUBO-i5 light six degrees of freedom cooperative robot developed by the traveling technology as the required robot system.
The end execution mechanism of the mechanical arm system can adopt a two-finger mechanical jaw or a bionic multi-finger mechanical hand; such mechanisms have a simple structure and good flexibility. For the specific electric-power working environment, the mechanical jaw or hand is, as shown in fig. 4, made of a high-performance resin material with strong insulating and wear-resisting properties.
The depth camera system is arranged at the free end of the mechanical arm and does not interfere with any motion trajectory of the end execution mechanism. The viewing range of the depth camera covers the end execution mechanism and its corresponding manipulation object. In this embodiment, the depth camera is a Kinect sensor developed by Microsoft Corporation.
The upper computer is electrically connected with the mobile platform, the lower computer of the mechanical arm system and the depth camera system. The upper computer is in effect a small data server; according to the performance requirements of the system, a technician can configure a corresponding computer system as the required upper computer. The normal functioning of the upper computer depends mainly on the software running in it. Specifically, in this embodiment the upper computer runs: a target recognition model and a target positioning model designed on the basis of YOLO V5; a space virtualization module based on a SLAM algorithm; a navigation module based on the Navigation function package; and a mechanical arm trajectory planning module based on MoveIt and the kinematics plug-in IKFAST.
Based on this software system, the upper computer of this embodiment is used for completing the following tasks: (1) generating, through the space virtualization module, a virtualization model representing the spatial layout of the machine room and the equipment inside it; (2) generating in advance, through the mechanical arm trajectory planning module, a pose control instruction for the mechanical arm at each track point, according to the spatial position relation between the depth camera at each track point of the inspection track and the target object to be inspected; (3) combining the data acquired in real time by the laser radar and the depth camera to generate, through the navigation module during the inspection stage, an optimized motion track for the mobile platform together with a series of corresponding motion control instructions; (4) performing target recognition on the depth images acquired in real time by the depth camera through the target recognition model, and, once a target is recognized, calculating its position through the target positioning model; (5) when a manipulation task is executed, inputting the position information of the target and the state data of the mechanical arm system into the mechanical arm trajectory planning module, solving the optimal motion track of the end execution mechanism, and generating the corresponding motion information queue for the mechanical arm system.
In the hardware system of the power machine room inspection robot developed in this embodiment, after a series of processes such as depth segmentation, feature extraction, target recognition and positioning are performed on the RGB image acquired by the Kinect sensor, motion planning is carried out according to the obtained spatial position of the object to be recognized, and the mechanical arm motion information queue obtained from the motion planning is transmitted to the lower computer through Socket communication. The lower computer receives and parses the motion queue information, drives the mechanical arm to move and grasp along the planned track, and transmits the real-time pose information of the mechanical arm back to the upper computer.
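The Socket hand-off described above, with the upper computer sending the motion information queue and the lower computer returning pose data, can be sketched as follows. The JSON message format and field names are assumptions for illustration, and the exchange runs here over a loopback connection with a simulated lower computer.

```python
# Minimal sketch (hypothetical message format): the upper computer sends the
# planned joint-motion queue to the lower computer over a TCP socket as JSON,
# and the lower computer replies with its (here, simulated) real-time pose.
import json
import socket
import threading

def lower_computer(server):
    conn, _ = server.accept()
    queue = json.loads(conn.recv(4096).decode())      # parse the motion queue
    # ... a real lower computer would drive the arm through each joint pose ...
    conn.sendall(json.dumps({"pose": queue["points"][-1]}).encode())
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))                         # loopback demo port
server.listen(1)
threading.Thread(target=lower_computer, args=(server,)).start()

client = socket.socket()
client.connect(("127.0.0.1", server.getsockname()[1]))
client.sendall(json.dumps({"points": [[0, 0, 0, 0, 0, 0],
                                      [0.1, 0.4, -0.2, 0, 0.3, 0]]}).encode())
reply = json.loads(client.recv(4096).decode())
print(reply["pose"])    # the last commanded joint pose, echoed back
client.close()
```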
On the software control side, this embodiment adopts the open-source ROS system to control the inspection robot. ROS is a robot software platform that provides a variety of development environments for robot application programs. ROS can be divided into two layers: the bottom layer is the operating system layer, and the upper layer consists of software packages with various functions, including SLAM mapping, Navigation, and MoveIt mechanical arm control. Based on SLAM technology, the position of the robot and the obstacle situation are determined; the Navigation function package is used to realize navigation, controlling the robot to move to the target position while avoiding obstacles. Meanwhile, machine vision software developed on the basis of YOLO V5 completes the positioning and identification of the instrument panels in the machine room by the mechanical arm, achieving the aim of automatic inspection. The complete flow of target recognition, positioning and exception handling in this embodiment is shown in fig. 5.
The target recognition model is obtained by training a YOLO V5 base model; its input is the original RGB-D image acquired by the depth camera, and its output is an RGB-D image containing a selection box around each recognized target. In this embodiment, the network models for target recognition and positioning are developed with YOLO V5, aimed in particular at recognition and positioning under low-light conditions. Many technical solutions already exist for this class of scenario, but most are not mature. For example, Wu Z. proposes an image conversion optimization network based on a cycle-consistent generative adversarial network: the discriminator network of CycleGAN is redesigned, an additional discriminator is added, parts of the network such as the loss function are optimized, and a target detection network is appended after the conversion network. Wang proposes an image enhancement method to improve low-light image quality: first, the image brightness is mapped to a desired level by a hyperbolic tangent curve; second, an unsharp-masking filter in the YCbCr color space together with block matching and three-dimensional filtering is used for image denoising and sharpening; finally, a convolutional neural network model performs detection to complete the monitoring task. Kuang developed a nighttime image enhancement method by modeling the adaptive feedback of horizontal cells and the center-surround antagonistic receptive fields of bipolar cells; on this basis, features are extracted for the classifier using a convolutional neural network, a histogram of oriented gradients and a local binary pattern, and the classifier is trained with a support vector machine.
However, these existing approaches are difficult to operate in practice, place high performance demands on hardware and software, are clearly insufficient in the real-time performance of the network model, and are costly. The target recognition and positioning model developed here on the basis of the YOLO V5 algorithm avoids these problems, while the RGB-D images obtained from the depth camera further improve recognition precision and efficiency.
Taking the dial identification of instruments as an example, the process of designing and training the target identification network with the YOLO V5 algorithm in this embodiment is roughly as follows. First, four folders, Annotations, images, ImageSets and labels, are created under the data directory: images stores the original picture data set, Annotations stores the xml files generated after labeling, labels stores the txt files holding the labeling content, and ImageSets stores the division of the data into training and test sets. Then, the dashboard to be identified is prepared, about 200 photos are taken from different angles, all photos are labeled with the labelImg labeling tool, and all generated xml files are put into the Annotations folder. Next, the files makeTxt.py and voc_label.py are created under the YOLO V5 root directory and run in turn. makeTxt.py divides the data set into a training set and a test set; after it runs, four files listing the picture names of the training and test sets appear in the ImageSets folder. voc_label.py reads the labeling information out of the labeled xml files and writes it into txt files; after it runs, the labeling information of the whole picture data set appears in the labels folder. The yaml file describing the data set is then modified, some parameters in train.py under the root directory are adjusted, and once everything is configured, train.py is executed to start training. After training, two weight files, best.pt and last.pt, are obtained in the weights folder; running them completes the identification of the instrument panels in the machine room. FIG. 6 is a process data record from the design of the dashboard target identification network in this embodiment. Correspondingly, network models for target identification of the other devices in the power machine room can be trained in a similar manner.
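The dataset-split step performed by makeTxt.py can be sketched as follows. The file names, split ratio and fixed seed are illustrative assumptions; a real script would also write the lists to disk under ImageSets.

```python
# Illustrative sketch of the dataset split performed by makeTxt.py:
# partition the annotated image names into training and test lists.
import random

def split_dataset(image_names, train_ratio=0.9, seed=0):
    names = sorted(image_names)
    random.Random(seed).shuffle(names)        # deterministic shuffle
    cut = int(len(names) * train_ratio)
    return {"train.txt": names[:cut], "test.txt": names[cut:]}

# ~200 dial photos, as in the embodiment above (names are made up)
splits = split_dataset([f"dial_{i:03d}" for i in range(200)])
print(len(splits["train.txt"]), len(splits["test.txt"]))
```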
After the target recognition module outputs a sample image containing the target selection box, the target positioning model calculates the spatial position of the target object through the following steps:
(1) calculating a first coordinate of the target object relative to the center of the camera lens, according to the pixel position of the recognized target's selection box in the RGB-D image and the depth information of that pixel area;
(2) converting the first coordinate into a second coordinate according to the installation position of the depth camera on the mechanical arm system, the second coordinate taking the free end of the mechanical arm as its origin;
(3) converting the second coordinate into an absolute coordinate in the world coordinate system according to the real-time position of the inspection robot in the power machine room, thereby obtaining the actual spatial position of the target object.
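The three steps above can be sketched as a chain of coordinate transforms. The camera intrinsics, mounting offset and robot pose below are made-up values, and the frames are deliberately simplified (step 2 is a pure translation; a real calibration would also carry rotations).

```python
# Hedged sketch of the three positioning steps: pixel + depth -> camera-frame
# coordinate (pinhole model) -> arm-free-end frame (fixed mounting offset)
# -> world frame (robot pose). All numeric values are illustrative.
import math

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Step (1): back-project a pixel with known depth through the pinhole model."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

def camera_to_flange(p, offset=(0.0, -0.05, 0.08)):
    """Step (2): shift by the camera's mounting position at the arm's free end."""
    return tuple(pi + oi for pi, oi in zip(p, offset))

def flange_to_world(p, robot_xy, yaw):
    """Step (3): rotate by the robot's heading and add its world position."""
    x = robot_xy[0] + p[0] * math.cos(yaw) - p[1] * math.sin(yaw)
    y = robot_xy[1] + p[0] * math.sin(yaw) + p[1] * math.cos(yaw)
    return (x, y, p[2])

# A target at the image center, 1.5 m away, seen by a robot at (4, 2) facing +y.
cam = pixel_to_camera(u=320, v=240, depth=1.5, fx=525, fy=525, cx=320, cy=240)
world = flange_to_world(camera_to_flange(cam), robot_xy=(4.0, 2.0), yaw=math.pi / 2)
print(cam, world)
```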
The depth camera in this embodiment also plays a part in the path planning of the inspection robot. This embodiment adopts SLAM technology to realize spatial environment modeling and map generation. Traditional filter-based SLAM algorithms such as FastSLAM reduce computational complexity and have good robustness, but their memory consumption is severe in large-scale environments, and particle depletion affects the construction of the map. The SLAM algorithm in this embodiment, based on the fusion of the RGB-D depth camera and the laser radar, acquires depth information through multiple sensors and analyzes the pose of the equipment, so that a dense map can be constructed; the system is well suited to indoor environments.
In the scheme design stage of this embodiment, SLAM map construction is realized with the gmapping function package: the gmapping package is installed, a gmapping node is configured, and an Rviz start-up environment is configured for the gmapping node file as a demo. A robot model carrying a laser radar is started in the Gazebo environment, the gmapping demonstration file is launched, and finally the keyboard control node is started. Fig. 7 and 8 are examples of spatial modeling of some typical scenarios during the development of this embodiment.
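At a toy level, the rasterization underlying such map construction, turning lidar range readings into occupied cells of a 2-D grid, can be illustrated as follows. This is a deliberately simplified stand-in and not the gmapping algorithm itself, which additionally estimates the robot pose and ray-traces free space; grid size, resolution and beam pattern are assumptions.

```python
# Toy illustration (not gmapping): rasterize lidar beam endpoints into a
# 2-D occupancy grid, the data structure the SLAM map is built from.
import math

def mark_hits(grid, pose, ranges, angle_step, resolution=0.5):
    """Mark the cell at each beam endpoint as occupied (1)."""
    x0, y0, heading = pose
    for i, r in enumerate(ranges):
        a = heading + i * angle_step
        gx = int((x0 + r * math.cos(a)) / resolution)   # world -> grid column
        gy = int((y0 + r * math.sin(a)) / resolution)   # world -> grid row
        if 0 <= gx < len(grid[0]) and 0 <= gy < len(grid):
            grid[gy][gx] = 1

grid = [[0] * 8 for _ in range(8)]          # 8x8 cells, 0.5 m each
# Robot at (2.0, 2.0) facing +x; three 1.5 m beams sweeping 90 degrees.
mark_hits(grid, pose=(2.0, 2.0, 0.0), ranges=[1.5, 1.5, 1.5],
          angle_step=math.pi / 4)
print(sum(cell for row in grid for cell in row))   # occupied cell count
```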
In the navigation path optimization process, the move_base node is first configured; the launch file of move_base starts the four yaml files related to the robot navigation simulation configuration, chiefly the local planner yaml file and the common costmap yaml files. After the initial configuration is completed, the robot model is started in the Gazebo environment, the gmapping and move_base control nodes are launched, the configured Rviz environment is finally loaded, and a target point is set manually through '2D Nav Goal'.
Once target recognition, path planning and motion control of the mobile platform are in place, the remaining task is to complete each manipulation task more finely through the mechanical arm and the end execution mechanism. Existing research on mechanical arm motion control is complicated: the solving process is computationally heavy, and it is prone to finding no solution or to solution failure. In this embodiment, the mechanical arm is controlled by calling the MoveIt tool, and trajectory planning is realized by configuring the kinematics plug-in IKFAST. With IKFAST, the solving process for the operating state of the mechanical arm becomes very fast, the solution accuracy is very high, and results that traditional methods cannot solve become solvable.
This embodiment also simulates the grasping and manipulation process of the robot. After the robot simulation environment is started in a terminal, the Anaconda PATH entry (export PATH=/home/hsy/anaconda3/bin:$PATH) is enabled in the .bashrc file. Another terminal is then opened to start the YOLO V5 vision software: the source ~/.bashrc command updates the path, the robot workspace path is opened, the source activate and conda activate mypytorch commands start the YOLO V5 operating environment, and the launch file then brings up the YOLO V5 interface.
The MoveIt interface is then called and the positioned object coordinates are sent to MoveIt, which autonomously plans a mechanical arm movement path through the inverse kinematics of the robot so that the arm's end moves to the point to be grasped. The mechanical arm is controlled to move above the object and grasp downward; after the mechanical jaw closes and clamps, the arm is controlled to return to its initial posture, completing the simulation of the grasping function. Fig. 9 shows a continuous sequence of images from the simulation process.
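The grasping sequence just described (approach above the object, descend, close the jaw, return to the initial posture) can be expressed as a waypoint list. The pose format, hover offset and home pose are illustrative assumptions; a real system would hand each waypoint to MoveIt rather than print it.

```python
# Hedged sketch of the simulated grasp sequence as a waypoint list.
# Poses are (x, y, z, gripper) tuples in meters; values are illustrative.
def grasp_sequence(obj_xyz, hover=0.10, home=(0.0, 0.0, 0.5, "open")):
    x, y, z = obj_xyz
    return [
        home,                                    # start from the initial posture
        (x, y, z + hover, "open"),               # move above the object
        (x, y, z, "open"),                       # descend to the grasp point
        (x, y, z, "closed"),                     # close the mechanical jaw
        (x, y, z + hover, "closed"),             # lift the object
        (home[0], home[1], home[2], "closed"),   # return to the initial posture
    ]

plan = grasp_sequence((0.42, -0.13, 0.05))
for pose in plan:
    print(pose)
```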
In addition, in a further refined scheme of this embodiment, as shown in fig. 10, the upper computer also runs a feature matching model based on a graph convolutional neural network and an OCR character recognition model. The feature matching model performs feature matching between a recognized specific object and the typical state diagrams of that object in the database, so as to determine its working state. For example, after the upper computer recognizes that the object to be detected is a relay, feature matching can determine which working state the relay in the current image is in, so that corresponding work decisions can be made for different working states.
The OCR character recognition model first cuts the local area of the recognized specific target out of the image and then recognizes the characters or symbols in the cut image, thereby determining the text information carried by the target object. For example, the OCR module can replace manual meter reading, with the upper computer recording the data. The upper computer also comprises a storage module for storing the following data: the various raw data collected by the inspection robot while executing the inspection task, the inspection results obtained by analyzing and processing those raw data, the event information generated when various operation tasks are executed, and the log files recorded after each inspection task is completed.
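The image-cutting pre-step of the OCR model amounts to cropping the target's selection box out of the frame before character recognition. In this sketch a nested list stands in for a real pixel array, and the box format is an assumption.

```python
# Illustrative sketch of the OCR pre-step: cut the local region of the
# recognized target (its selection box) out of the image.
def crop(image, box):
    """box = (x_min, y_min, x_max, y_max) in pixel coordinates."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in image[y0:y1]]

# A 4x6 stand-in "image" whose pixels record their own (row, col) position.
image = [[f"{r},{c}" for c in range(6)] for r in range(4)]
patch = crop(image, (2, 1, 5, 3))
print(patch)   # 2 rows x 3 columns around the detected text region
```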
Example 2
In combination with the monitoring integrated power machine room inspection robot provided in embodiment 1, this embodiment further provides an unmanned inspection method for the power machine room, which uses that robot to replace manual work in the normalized automatic inspection of the power machine room and to actively handle certain specific abnormal events or fault states.
As shown in fig. 11, the unmanned inspection method provided in this embodiment includes the following steps:
1. initialization phase
S1: and in the initialized state, a motion control instruction is issued to the inspection robot of the electric machine room, and the channel inside the whole electric machine room is traversed.
S2: the navigation module in the upper computer generates an initialization patrol path according to a series of coordinate information recorded by the positioning module in the mobile platform.
The initialization patrol path further comprises marks corresponding to key equipment or devices to be inspected.
S3: and the mechanical arm track planning module pre-generates a pose control instruction of the mechanical arm corresponding to each track point in the initial inspection track according to the spatial position relation between the depth camera corresponding to each track point in the inspection track and the target object to be inspected.
The pose control instruction is used for controlling the mechanical arm system to operate cooperatively with the mobile platform, so that the depth camera system can acquire image data of all places or equipment in the electric machine room in the moving process of the mobile platform.
The initialization stage mainly serves to make the robot familiar with the general spatial environment of the power machine room and to define the work tasks at the different positions in the machine room; at this stage, the remote controller connected with the mobile platform is used to manually control the robot's walking process.
2. Normalized inspection stage
S4: after the inspection task is triggered periodically, the mobile platform of the electric machine room inspection robot moves in the circuit machine room according to the initialized inspection path.
S5: in the running process of the mobile platform, a space virtualization module in the upper computer carries out space virtualization modeling according to data acquired by the depth camera and the laser radar in real time, and a navigation module optimizes an initialization inspection path according to barrier information in the virtualization model; and then realize the self-adaptation navigation and the obstacle avoidance of electric power computer lab inspection robot.
S6: and the upper computer acquires or stores the image or video data acquired by the depth camera in real time, and identifies equipment or devices at the key nodes through the target identification model.
S7: and the upper computer performs feature comparison or character recognition on the identified target object, so as to determine the real-time running state of each key device or apparatus.
The inspection stage mainly comprises the robot walking and avoiding obstacles automatically while acquiring the operating state data of the key equipment during its movement.
3. Event handling phase
When step S7 finds that the running state of any device or apparatus is abnormal, or when the robot reaches the position of the operation object of a task issued by the power machine room control center, the following active handling operations are executed:
S8: position information of the operation object is calculated from the target positioning model.
S9: and the upper computer controls the mobile platform to approach the operation object according to the change of the depth information of the target object in the depth image.
S10: after the latest operation position is reached, the upper computer calculates the detailed position of the operation object through the target positioning module. And then solving the optimal motion trail of the end execution mechanism through a mechanical arm trail planning module by combining the state data of the mechanical arm uploaded by the upper computer, and finally generating a motion information queue corresponding to the mechanical arm system and sending the motion information queue to the lower computer.
S11: the lower computer controls the mechanical arm to move according to the motion information queue, and the tail end executing mechanism executes corresponding actions, so that the treatment of the abnormal event or the implementation of the designated task is completed.
S12: after the inspection of all key nodes in the initialized inspection path is completed, the current inspection task is finished, and the upper computer generates a work log.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (10)

1. The integrated monitoring inspection robot for the electric power machine room is characterized by being used for carrying out normalized inspection on the electric power machine room and timely disposing a specific abnormal state; the electric power machine room inspection robot includes:
the mobile platform is used as a running mechanism of the electric machine room inspection robot; the mobile platform comprises a positioning module and is provided with at least one laser radar; the positioning module is used for collecting geographic coordinates of the mobile platform in the electric machine room; the laser radar is used for detecting obstacles in the movement direction;
the mechanical arm system comprises a lower computer, a mechanical arm and an end execution mechanism; the lower computer is used for controlling the pose and the motion state of the mechanical arm, and is also used for controlling the action of the tail end executing mechanism; the fixed end of the mechanical arm is loaded on the mobile platform, and the free end of the mechanical arm is used for installing an end actuating mechanism; the end execution mechanism is used for completing corresponding control tasks by executing specific actions;
the depth camera system is arranged at the free end of the mechanical arm, and any motion trail of the depth camera system and the tail end executing mechanism is not interfered; the view finding range of the depth camera comprises an end actuating mechanism and a corresponding control object; and
The upper computer is electrically connected with the mobile platform, the lower computer of the mechanical arm system and the depth camera system; the upper computer is respectively provided with: a target recognition model and a target positioning model based on the YOLO V5 design; a space virtualization module realized based on SLAM algorithm; a Navigation module based on Navigation function package; the mechanical arm track planning module is realized based on Moveit and a kinematic plug-in IKFAST; the upper computer is used for: (1) Generating a virtualization model representing the spatial layout of the machine room and the internal equipment thereof through a spatial virtualization module; (2) According to the spatial position relation between the depth camera corresponding to each track point in the inspection track and the object to be inspected, generating a pose control instruction of the corresponding mechanical arm at each track point in advance through the mechanical arm track planning module; (3) Combining data acquired by a laser radar and a depth camera in real time, generating an optimized motion trail of a mobile platform through the navigation module in a patrol stage, and generating a series of corresponding motion control instructions; (4) Performing target recognition on the depth image acquired by the depth camera in real time through a target recognition model, and then calculating the position of the target through a target positioning model after the target is recognized; (5) When the manipulation task is executed, the position information of the target and the state data of the mechanical arm system are input into the mechanical arm track planning module, the optimal motion track of the end actuating mechanism is solved, and a motion information queue corresponding to the mechanical arm system is generated.
2. The supervisory-integrated power machine room inspection robot of claim 1, wherein: the mobile platform is an AGV device, adopting a wheeled robot chassis, a tracked robot chassis or any other bionic walking robot chassis;
the mobile platform further comprises a remote control device, the remote control device is used for manually controlling the running state of the mobile platform, and the priority of a motion control instruction issued by the remote control device is higher than that of the upper computer.
3. The supervisory-integrated power machine room inspection robot of claim 1, wherein: the number of the laser radars in the mobile platform is multiple, and the laser radars are distributed at the circumferential positions of the mobile platform; the positioning module selects products based on GPS positioning, base station positioning or base station and wifi hybrid positioning technology.
4. The supervisory-integrated power machine room inspection robot of claim 1, wherein: the mechanical arm adopts a mechanical arm with six degrees of freedom; the tail end executing mechanism of the mechanical arm system adopts a mechanical clamping jaw or a bionic multi-finger mechanical arm; the surface of the mechanical clamping jaw or the mechanical arm is subjected to insulation and wear resistance enhancement treatment.
5. The supervisory-integrated power machine room inspection robot of claim 1, wherein: the upper computer, the mobile platform, the mechanical arm system and the depth camera system complete bidirectional transmission of instructions and data by adopting a Socket communication mode based on Ethernet or wireless.
6. The supervisory-integrated power machine room inspection robot of claim 1, wherein: the target recognition model is obtained by training a YOLO V5 basic model, and the input of the target recognition model is an original RGB-D image acquired by a depth camera; the output of the target recognition model is an RGB-D image containing a selection frame corresponding to a recognition target;
after the target recognition module outputs the sample image containing the target selection frame, the target positioning model calculates the spatial position of the target object through the following steps:
(1) Calculating a first coordinate of the target object relative to the center of the camera lens according to the pixel position of the identified selection frame of the target object in the RGB-D image and the depth information of the pixel area;
(2) According to the installation position of the depth camera on the mechanical arm system, converting a first coordinate of a target object into a second coordinate, wherein the second coordinate takes the free end of the mechanical arm as an origin;
(3) And converting the second coordinate into an absolute coordinate under a world coordinate system according to the real-time position of the electric machine room inspection robot in the electric machine room to obtain the actual space position of the target object.
7. The supervisory-integrated power machine room inspection robot of claim 6, wherein: the upper computer also comprises a characteristic matching model based on a graph convolution neural network and an OCR character recognition model; the characteristic matching model is used for carrying out characteristic matching on the identified specific target object and a typical state diagram of the target object in the database, so as to determine the working state of the specific target object; the OCR character recognition model is used for firstly carrying out image cutting on the local area of the recognized specific target, and then recognizing characters or symbols in the cut image; and further determining the text information in the target object.
8. The supervisory-integrated power machine room inspection robot of claim 1, wherein: in the process of optimizing the motion trail, the navigation module simultaneously adopts fusion characteristics in data acquired by a laser radar and a depth camera as reference information, and realizes accurate identification and modeling of the obstacle by utilizing a SLAM algorithm.
9. The supervisory-integrated power machine room inspection robot of claim 1, wherein: the upper computer also comprises a storage module, and the data stored by the storage module comprises: various original data collected by the electric machine room inspection robot in the process of executing the inspection task are analyzed and processed to obtain inspection results according to the various original data; and event information generated when various operation tasks are executed, and log files recorded after the inspection tasks are completed.
10. An unmanned inspection method for an electric machine room is characterized by comprising the following steps of: the monitoring integrated power machine room inspection robot as claimed in any one of claims 1-9 is used for automatically inspecting the power machine room in a normalized manner and actively handling certain specific abnormal events or fault states; the unmanned inspection method comprises the following steps:
1. initialization phase
S1: under the initialized state, a motion control instruction is issued to the inspection robot of the electric machine room, and the channel inside the whole electric machine room is traversed;
s2: the navigation module in the upper computer generates an initialization patrol path according to a series of coordinate information recorded by the positioning module in the mobile platform;
The initialization patrol path further comprises marks corresponding to key equipment or devices to be inspected;
s3: the mechanical arm track planning module pre-generates a pose control instruction of the mechanical arm corresponding to each track point in the initial inspection track according to the spatial position relation between the depth camera corresponding to each track point in the inspection track and the target object to be inspected;
the pose control instruction is used for controlling the mechanical arm system to operate cooperatively with the mobile platform, so that the depth camera system can acquire image data of all places or equipment in the electric power machine room in the moving process of the mobile platform;
2. normalized inspection stage
S4: after the inspection task is triggered according to the period, the mobile platform of the electric machine room inspection robot moves in the circuit machine room according to the initialized inspection path;
s5: in the running process of the mobile platform, a space virtualization module in the upper computer carries out space virtualization modeling according to data acquired by the depth camera and the laser radar in real time, and a navigation module optimizes an initialization inspection path according to barrier information in the virtualization model; thereby realizing the self-adaptive navigation and obstacle avoidance of the inspection robot of the electric power machine room;
S6: the upper computer acquires and stores the image or video data captured by the depth camera in real time, and identifies the equipment or devices at the key nodes through the target recognition model;
S7: the upper computer performs feature comparison or character recognition on the identified target objects, thereby determining the real-time operating state of each key device or apparatus;
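The character-recognition comparison in the step above could look something like this. The OCR output is assumed to be a panel reading such as "220V"; the function name, the unit-stripping rule, and the expected-range table are illustrative assumptions, not part of the claim:

```python
def check_device_state(device_id, ocr_text, expected):
    """Compare an OCR'd panel reading against the expected range for
    a device; returns (is_normal, numeric_value)."""
    try:
        # strip common unit suffixes (V, A, %, °C) before parsing
        value = float(ocr_text.strip().rstrip("VA%°C"))
    except ValueError:
        return False, None  # unreadable display is treated as abnormal
    lo, hi = expected[device_id]
    return lo <= value <= hi, value
```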
3. Event handling phase
When step S7 finds that the operating state of any device or apparatus is abnormal, or when the robot reaches the position of the operation object of a task issued by the power machine room control center, the following active handling operations are executed:
S8: the target positioning model calculates the position information of the operation object;
S9: the upper computer controls the mobile platform to approach the operation object according to the change in the depth of the target object in the depth image;
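The depth-driven approach can be pictured as a simple proportional controller: command a forward speed proportional to the remaining distance reported by the depth image, clamped and zeroed at a stand-off distance. A minimal sketch with assumed gains and limits:

```python
def approach_speed(depth_m, stop_dist=0.6, gain=0.4, v_max=0.5):
    """Forward speed command (m/s) from the target's current depth
    reading: proportional to the remaining distance, clamped to
    v_max, and zero once inside the stand-off distance."""
    err = depth_m - stop_dist
    if err <= 0:
        return 0.0          # close enough: stop the mobile platform
    return min(v_max, gain * err)
```

As the platform closes in, the depth reading shrinks and the commanded speed ramps down smoothly to zero at the stand-off distance.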
S10: after the optimal operating position is reached, the upper computer calculates the precise position of the operation object through the target positioning module; then, combining this with the mechanical arm state data uploaded to the upper computer, it solves the optimal motion trajectory of the end effector through the mechanical arm trajectory planning module, and finally generates a motion information queue for the mechanical arm system and sends it to the lower computer;
S11: the lower computer controls the mechanical arm to move according to the motion information queue, and the end effector executes the corresponding actions, thereby completing the handling of the abnormal event or the execution of the assigned task;
S12: after all key nodes in the initial inspection path have been inspected, the current inspection task ends and the upper computer generates a work log.
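The end-of-patrol log in S12 could be assembled as a simple structured record, for example as JSON. The field names and format here are illustrative assumptions only:

```python
import json
import time

def make_work_log(task_id, inspected_nodes, anomalies):
    """Assemble the end-of-patrol work log as a JSON string."""
    return json.dumps({
        "task": task_id,
        "finished_at": time.strftime("%Y-%m-%d %H:%M:%S"),
        "inspected_nodes": inspected_nodes,  # key nodes covered this run
        "anomalies": anomalies,              # abnormal events handled
    })
```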
CN202310246774.1A 2023-03-10 2023-03-10 Monitoring integrated power machine room inspection robot and unmanned inspection method Pending CN116494201A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310246774.1A CN116494201A (en) 2023-03-10 2023-03-10 Monitoring integrated power machine room inspection robot and unmanned inspection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310246774.1A CN116494201A (en) 2023-03-10 2023-03-10 Monitoring integrated power machine room inspection robot and unmanned inspection method

Publications (1)

Publication Number Publication Date
CN116494201A true CN116494201A (en) 2023-07-28

Family

ID=87319176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310246774.1A Pending CN116494201A (en) 2023-03-10 2023-03-10 Monitoring integrated power machine room inspection robot and unmanned inspection method

Country Status (1)

Country Link
CN (1) CN116494201A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116976642A (en) * 2023-08-31 2023-10-31 宁夏绿昊光伏发电有限公司 Operation and maintenance management system and method for intelligent machine room for electric power communication
CN116976642B (en) * 2023-08-31 2024-04-02 宁夏绿昊光伏发电有限公司 Operation and maintenance management system and method for intelligent machine room for electric power communication
CN117428792A (en) * 2023-12-21 2024-01-23 商飞智能技术有限公司 Operating system and method for robot
CN117831147A (en) * 2024-03-04 2024-04-05 陕西泰沃云科技有限公司 Robot and camera combined inspection method and system
CN117831147B (en) * 2024-03-04 2024-05-03 陕西泰沃云科技有限公司 Robot and camera combined inspection method and system

Similar Documents

Publication Publication Date Title
CN111897332B (en) Semantic intelligent substation robot humanoid inspection operation method and system
CN116494201A (en) Monitoring integrated power machine room inspection robot and unmanned inspection method
CN111421539A (en) Industrial part intelligent identification and sorting system based on computer vision
Brook et al. Collaborative grasp planning with multiple object representations
CN106452903A (en) Cloud-aided intelligent warehouse management robot system and method
JP2016522089A (en) Controlled autonomous robot system for complex surface inspection and processing
Zhao et al. Autonomous live working robot navigation with real‐time detection and motion planning system on distribution line
CN111331607B (en) Automatic grabbing and stacking method and system based on mechanical arm
CN116755474A (en) Electric power line inspection method and system for unmanned aerial vehicle
Huang et al. A case study of cyber-physical system design: Autonomous pick-and-place robot
Asadi et al. Automated object manipulation using vision-based mobile robotic system for construction applications
CN115299245B (en) Control method and control system of intelligent fruit picking robot
Mohan et al. Design of robot monitoring system for aviation
CN114132745A (en) Automatic workpiece loading and unloading system and method based on AGV and machine vision
CN112207839A (en) Mobile household service robot and method
CN110656975B (en) Tunnel rescue system and method based on virtual reality and ACP parallel intelligence
CN116852352A (en) Positioning method for mechanical arm of electric secondary equipment based on ArUco code
Rezaei et al. A deep learning-based approach for vehicle motion prediction in autonomous driving
CN115661966A (en) Inspection system and method based on augmented reality
CN116047950A (en) Modularized system of soil site pollution mobile monitoring robot
CN109977884A (en) Target follower method and device
Jácome et al. A mini-sized agent testbed for applications in mobile robotics
Mahaadevan et al. AViTRoN: Advanced vision track routing and navigation for autonomous charging of electric vehicles
Zhang et al. [Retracted] Multifunctional Robot Grasping System Based on Deep Learning and Image Processing
Li Constructing the intelligent expressway traffic monitoring system using the internet of things and inspection robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination