CN111923042B - Virtualization processing method and system for cabinet grid and inspection robot - Google Patents


Info

Publication number
CN111923042B
Authority
CN
China
Prior art keywords
camera
offset
target
image information
mechanical arm
Prior art date
Legal status
Active
Application number
CN202010706984.0A
Other languages
Chinese (zh)
Other versions
CN111923042A (en)
Inventor
李超
敖奇
王福闯
张奎刚
刘甲宾
Current Assignee
CRSC Research and Design Institute Group Co Ltd
Original Assignee
CRSC Research and Design Institute Group Co Ltd
Priority date
Filing date
Publication date
Application filed by CRSC Research and Design Institute Group Co Ltd filed Critical CRSC Research and Design Institute Group Co Ltd
Priority to CN202010706984.0A
Publication of CN111923042A
Application granted
Publication of CN111923042B

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J 9/1628: Programme controls characterised by the control loop
    • B25J 9/163: Programme controls characterised by the control loop: learning, adaptive, model based, rule based expert control
    • B25J 9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1664: Programme controls characterised by programming, planning systems for manipulators: characterised by motion, path, trajectory planning
    • B25J 9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697: Vision controlled systems

Abstract

The invention belongs to the technical field of rail transit and discloses a method and a system for blurring a cabinet grid, as well as an inspection robot. The blurring method comprises the following steps: step S1: controlling a mechanical arm to drive a first camera to move to a first position according to a work plan; step S2: acquiring first image information of a target in the cabinet through the first camera, identifying the target in the cabinet according to the first image information, and obtaining a first offset between the target and the first camera; step S3: obtaining a fourth offset between a second camera and the target according to the first offset, a second offset between the first camera and the mechanical arm, and a third offset between the second camera and the mechanical arm, and controlling the mechanical arm to drive the second camera to move to a second position according to the fourth offset; step S4: acquiring second image information of the target through the grid of the cabinet with the second camera. The current state of the target can therefore be identified during inspection without opening the cabinet door.

Description

Virtualization processing method and system for cabinet grid and inspection robot
Technical Field
The invention belongs to the technical field of rail transit, and particularly relates to a virtualization processing method and system for a cabinet grid and an inspection robot.
Background
With the rapid development of railway construction in China, high-speed, high-density train operation imposes stricter requirements on the safety and the operation and maintenance management of railway electric service equipment and systems. Signal relay stations along high-speed railway sections are mostly unattended. Because access is inconvenient and night patrols bring both inconvenience and traffic-safety risks to maintenance and emergency handling, electric service inspection personnel cannot fully grasp the operating condition of the equipment at these unattended relay stations in real time, and blind spots are likely to appear in the monitored application state of fixed-point signal equipment.
The intelligent patrol system for unattended signal relay stations of the national railway mainly performs automatic patrol monitoring of railway equipment rooms: it monitors the technical indicators of signal equipment, key devices, and instruments in real time and raises alarms, which greatly improves the monitoring and operation-and-maintenance level of high-speed railway signal equipment, strengthens security control of key high-speed railway sites, shortens equipment-fault handling delays, and safeguards safe high-speed railway operation.
However, practice shows that existing inspection robots lack the capability of blurring grids: they can only inspect cabinets without grid shielding, and can only display the pictures taken by the camera, so the type, position, and indicator-lamp state of each board card must be identified manually. Existing inspection robots therefore cannot effectively inspect indoor cabinets fitted with grids, including the type and position of the board cards and the state of the indicator lamps, and they lack an alarm mechanism for fault anomalies. Under these conditions, the measure mostly taken at present is to remove the cabinet door outright, which carries the risk of misoperation by unauthorized persons.
In addition, the existing inspection robot is controlled only by an industrial personal computer, whose CPU can hardly run today's complex deep learning algorithms, so the robot's image processing capability is limited.
Therefore, a cabinet-grid blurring method and system and an inspection robot that overcome the above defects are urgently needed.
Disclosure of Invention
In view of the above problems, the present invention provides a method for blurring the grid of a cabinet, comprising:
step S1: controlling a mechanical arm to drive a first camera to move to a first position according to a work plan;
step S2: acquiring first image information of a target in a cabinet through the first camera, identifying the target in the cabinet according to the first image information, and obtaining a first offset between the target and the first camera;
step S3: obtaining a fourth offset between a second camera and the target according to the first offset, a second offset between the first camera and the mechanical arm, and a third offset between the second camera and the mechanical arm, and controlling the mechanical arm to drive the second camera to move to a second position according to the fourth offset;
step S4: acquiring second image information of the target through the grid of the cabinet with the second camera.
The blurring processing method further includes step S5: identifying the current state of the target according to the second image information.
In the blurring processing method, step S1 includes:
step S11: mounting the first camera and the second camera on the mechanical arm;
step S12: obtaining the second offset and the third offset through a camera calibration technique by a calibration module of the main control unit;
step S13: receiving the work plan and obtaining from it the position information of the first position through a processing module of the main control unit;
step S14: controlling, by a control module of the main control unit, the mechanical arm to drive the first camera to move to the first position according to the position information of the first position.
In the blurring processing method, step S14 further includes: while the mechanical arm moves, the control module controls the first camera to capture a video stream, the first camera outputs the video stream to the processing module, and the processing module obtains and displays the real-time position of the mechanical arm from the video stream.
In the blurring processing method, step S2 includes:
step S21: the control module outputs a first acquisition instruction to the first camera;
step S22: the first camera captures the first image information according to the first acquisition instruction and outputs it to the GPU unit;
step S23: the GPU unit marks the target in the first image information through a deep learning algorithm and obtains target information and the first offset;
step S24: the GPU unit outputs the first image information marked with the target, the target information, and the first offset to the processing module.
In the blurring processing method, step S3 includes:
step S31: the processing module obtains the fourth offset according to the first offset, the second offset and the third offset;
step S32: the processing module obtains the position information of the second position according to the fourth offset and outputs the position information to the control module;
step S33: the control module controls the mechanical arm to drive the second camera to move to the second position according to the position information of the second position.
In the blurring processing method, step S4 further includes: after the second camera reaches the second position, the control module controls the second camera to capture the second image information, and the second camera outputs the second image information to the processing module.
In the blurring processing method, step S5 further includes: the processing module outputs the second image information to a background system, and the background system identifies the current state of the target in the second image information through a deep learning algorithm and a traditional image algorithm.
The invention also provides a system for blurring the grid of a cabinet, comprising:
a first camera mounted on a mechanical arm;
a main control unit electrically connected to the mechanical arm and the first camera, wherein after controlling the mechanical arm, according to a work plan, to drive the first camera to move to a first position, the main control unit controls the first camera to acquire first image information of a target in the cabinet;
a GPU unit that identifies the target in the cabinet according to the first image information and obtains a first offset between the target and the first camera; and
a second camera mounted on the mechanical arm, wherein the main control unit obtains a fourth offset between the second camera and the target according to the first offset, a second offset between the first camera and the mechanical arm, and a third offset between the second camera and the mechanical arm, controls the mechanical arm to drive the second camera to move to a second position according to the fourth offset, and controls the second camera to acquire second image information of the target through the grid of the cabinet.
The blurring processing system further includes a background system that identifies the current state of the target according to the second image information.
In the above blurring processing system, the main control unit includes:
a calibration module that calibrates the first camera and the second camera through a camera calibration technique to obtain the second offset and the third offset;
a processing module that receives the work plan and obtains from it the position information of the first position; and
a control module that receives the position information of the first position output by the processing module and controls the mechanical arm to drive the first camera to move to the first position accordingly.
In the blurring processing system, while the mechanical arm moves, the control module controls the first camera to capture a video stream, the first camera outputs the video stream to the processing module, and the processing module obtains the real-time position of the mechanical arm from the video stream.
In the blurring processing system, after the first camera reaches the first position, the control module outputs a first acquisition instruction to the first camera, the first camera captures the first image information according to the instruction and outputs it to the GPU unit, the GPU unit marks the target in the first image information through a deep learning algorithm and obtains the target information and the first offset, and the GPU unit outputs the first image information marked with the target, the target information, and the first offset to the processing module.
In the blurring processing system, the processing module obtains the fourth offset according to the first offset, the second offset, and the third offset, obtains the position information of the second position from the fourth offset, and outputs it to the control module, which controls the mechanical arm to drive the second camera to move to the second position accordingly.
In the blurring processing system, after the second camera reaches the second position, the control module outputs a second acquisition instruction to the second camera, the second camera captures the second image information, and the second camera outputs it to the processing module.
In the blurring processing system, the processing module outputs the second image information to a background system, and the background system identifies the current state of the target in the second image information through a deep learning algorithm and a traditional image algorithm.
The invention also provides a patrol robot, which comprises:
a mechanical arm;
the blurring processing system of any one of the above, connected to the mechanical arm, wherein the inspection robot collects and identifies, through the mechanical arm and the blurring processing system, the current state of a target shielded by the cabinet grid.
Compared with the prior art, the invention has the following effects:
a GPU unit is arranged in the inspection robot, so the robot can run deep learning algorithms with larger networks and higher precision and can process high-resolution, high-frame-rate video streams in real time; meanwhile, the invention combines a depth camera and a deep learning algorithm with the mechanical arm to realize the grid blurring function, so the current state of the target can be identified during inspection without opening the cabinet door.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a blurring processing method according to the present invention;
FIG. 2 is a flowchart illustrating the substeps of step S1 in FIG. 1;
FIG. 3 is a flowchart illustrating the substeps of step S2 in FIG. 1;
FIG. 4 is a flowchart illustrating the substeps of step S3 in FIG. 1;
FIG. 5 is a schematic diagram of a virtualization processing system according to the present invention;
FIG. 6 is a diagram of a cabinet;
FIG. 7 is a diagram of a recognition result.
Wherein the reference numerals are:
first camera: 11
main control unit: 12
GPU unit: 13
second camera: 14
background system: 15
calibration module: 121
processing module: 122
control module: 123
mechanical arm: 21
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As used herein, the terms "comprising," "including," "having," "containing," and the like are open-ended terms that mean including, but not limited to.
The exemplary embodiments of the present invention and the description thereof are provided to explain the present invention and not to limit the present invention. Additionally, the same or similar numbered elements/components used in the drawings and the embodiments are used to represent the same or similar parts.
The invention provides a method that uses deep learning to accurately identify the type and position of a board card behind the grid of a cabinet and uses an industrial camera to obtain a clear picture of the board-card indicator lamps with the grid blurred away, thereby realizing inspection without removing the cabinet door and enabling recognition of the state of the board-card indicator lamps.
Referring to fig. 1, fig. 1 is a flow chart of the blurring processing method according to the present invention. As shown in fig. 1, the method for blurring the grid of a cabinet according to the present invention includes:
step S1: and controlling the mechanical arm to drive the first camera to move to the first position according to the work plan.
In this embodiment, the mechanical arm may be a multi-axis mechanical arm, an xyz platform, or a similar product.
Referring to fig. 2, fig. 2 is a flowchart illustrating a sub-step of step S1 in fig. 1. As shown in fig. 2, the step S1 includes:
step S11: mounting the first camera and the second camera on the robotic arm.
In the present embodiment, the first camera is preferably a depth camera and the second camera an industrial camera, but the invention is not limited thereto. Specifically, the depth camera and the industrial camera are fixed at the front end of the mechanical arm, so that their positions relative to the mechanical arm are fixed.
Step S12: and the calibration module of the main control unit obtains the second offset and the third offset by a camera calibration technology.
In this embodiment, the main control unit may be provided as a separate unit, or it may be the industrial personal computer of the inspection robot.
Specifically, after the cameras are installed, the calibration module calibrates them through a camera calibration technique to obtain the second offset between the depth camera and the mechanical arm and the third offset between the industrial camera and the mechanical arm.
For example, a calibration plate is first prepared; the calibration plate and the base of the mechanical arm are kept fixed throughout the camera calibration. The mechanical arm is then adjusted so that the camera photographs the calibration plate from several different positions, ensuring that the entire plate lies inside each shot.
Because the calibration plate and the mechanical arm base stay fixed across the data sets, and the relative pose of the camera with respect to the mechanical arm terminal is likewise fixed (denote it T), any two data sets satisfy

    T_base→plate = T_base→end(1) · T · T_cam→plate(1) = T_base→end(2) · T · T_cam→plate(2)

which can be further transformed into

    (T_base→end(2))⁻¹ · T_base→end(1) · T = T · T_cam→plate(2) · (T_cam→plate(1))⁻¹,

the classical hand-eye equation of the form AX = XB. The relative position of the camera and the mechanical arm terminal, i.e. the offset, can then be obtained with a calibration algorithm such as Tsai-Lenz.
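As an illustration only, the Tsai-Lenz step can be reproduced with OpenCV's hand-eye calibration routine, cv2.calibrateHandEye. The sketch below fabricates synthetic, noise-free poses so that it runs stand-alone; in a real setup the arm-end poses come from the robot controller and the plate poses from solving PnP on the detected calibration-plate corners. It is a sketch of the technique, not the patent's actual code.

    import cv2
    import numpy as np

    rng = np.random.default_rng(0)

    def random_pose():
        R = cv2.Rodrigues(rng.uniform(-0.5, 0.5, 3))[0]
        t = rng.uniform(-0.2, 0.2, (3, 1))
        return R, t

    R_x, t_x = random_pose()    # ground-truth camera-to-arm-end offset to recover
    R_bp, t_bp = random_pose()  # fixed base-to-plate pose (unknown to the solver)

    R_end2base, t_end2base, R_plate2cam, t_plate2cam = [], [], [], []
    for _ in range(10):  # ten simulated calibration shots
        R_eb, t_eb = random_pose()  # arm-end pose reported by the controller
        # Plate pose in the camera frame: T_plate2cam = X^-1 · T_end2base^-1 · T_plate2base
        R_pc = R_x.T @ R_eb.T @ R_bp
        t_pc = R_x.T @ (R_eb.T @ (t_bp - t_eb) - t_x)
        R_end2base.append(R_eb); t_end2base.append(t_eb)
        R_plate2cam.append(R_pc); t_plate2cam.append(t_pc)

    # Tsai-Lenz solves AX = XB for X, the camera pose relative to the arm end,
    # i.e. the second/third offsets of step S12.
    R_est, t_est = cv2.calibrateHandEye(
        R_end2base, t_end2base, R_plate2cam, t_plate2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)
    print(np.allclose(R_est, R_x, atol=1e-6), np.allclose(t_est, t_x, atol=1e-6))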
Step S13: and receiving and obtaining the position information of the first position according to the working plan through a processing module of the main control unit.
Specifically, after receiving a work plan, for example a patrol plan, the processing module parses it to obtain the position information of the cabinet to be inspected and the position information of the first position at which the first camera will capture images.
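The patent does not specify a concrete work-plan format, so the following parse is a purely hypothetical sketch; the JSON field names (tasks, cabinet_position, first_position and so on) are illustrative assumptions, not the patent's schema.

    import json

    plan_text = """
    { "tasks": [ { "cabinet_id": "GD-03",
                   "cabinet_position": [12.4, 3.1, 0.0],
                   "first_position": [0.35, 0.10, 1.20, 0.0, 0.0, 90.0] } ] }
    """

    for task in json.loads(plan_text)["tasks"]:
        cabinet_xyz = task["cabinet_position"]  # where the robot body stops
        first_pose = task["first_position"]     # arm pose: x, y, z, roll, pitch, yaw
        print(task["cabinet_id"], cabinet_xyz, first_pose)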
Step S14: and the control module of the main control unit controls the mechanical arm to drive the first camera to move to the first position according to the position information of the first position.
Specifically, referring to fig. 6, which shows the cabinet: according to the position information of the cabinet to be inspected, the control module moves the robot body to a position at a certain distance from the cabinet and keeps it still; according to the position information of the first position, the control module then moves the mechanical arm so that the first camera is fixed perpendicular to the cabinet at a certain distance from it, that is, fixed at the first position.
Step S14 further includes: while the mechanical arm moves, the control module controls the first camera to capture a video stream, the first camera outputs the video stream to the processing module, and the processing module obtains and displays the real-time position of the mechanical arm from the video stream. Specifically, throughout the inspection the depth camera stays on and continuously captures video, so that the position of the mechanical arm can be obtained in real time and it is prevented from touching other equipment in the machine room.
Step S2: acquiring first image information of a target in the cabinet through the first camera, identifying the target in the cabinet according to the first image information, and obtaining a first offset between the target and the first camera, the first offset comprising an x offset, a y offset, and a z offset.
In this embodiment, the target is a board card in the cabinet; in other embodiments, the target may also be a switch. A sketch of how a detected pixel and its depth yield these offset components is given below.
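The patent does not spell out how the depth camera converts a detected pixel into the x, y, and z components of the first offset, but the standard pinhole back-projection would serve; the intrinsic parameters below are placeholder values standing in for the calibrated ones.

    import numpy as np

    def pixel_to_offset(u, v, depth, fx, fy, cx, cy):
        """Back-project pixel (u, v) with measured depth (metres) into the
        camera frame, giving the target's x/y/z offset from the camera."""
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.array([x, y, depth])

    # Placeholder intrinsics; real values come from camera calibration.
    print(pixel_to_offset(u=812, v=455, depth=0.62,
                          fx=920.0, fy=920.0, cx=640.0, cy=360.0))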
Referring to fig. 3, fig. 3 is a flowchart illustrating a substep of step S2 in fig. 1. As shown in fig. 3, the step S2 includes:
step S21: the control module outputs a first acquisition instruction to the first camera;
step S22: the first camera acquires and obtains the first image information according to the first acquisition instruction and outputs the first image information to the GPU unit;
step S23: the GPU unit marks the target in the first image information through a deep learning algorithm and obtains target information and the first offset;
step S24: the GPU unit outputs the first image information marked with the target, the target information and the first offset to the processing module.
Specifically, referring to fig. 7, which shows a recognition result: after the first camera moves to the first position, the control module outputs a first acquisition instruction to it, and the first camera captures the first image information accordingly. The first camera is connected to the GPU unit and sends the first image information to it. Through a trained deep learning algorithm, the GPU unit frames each target detected in the first image information with a rectangular or polygonal box (as shown in fig. 7) and marks the first image information with the target information and the first offset, where the target information comprises the name of the target and the first offset is the relative position of the target. The GPU unit outputs the marked first image information, the target information, and the first offset to the processing module by wired or wireless transmission.
The deep learning algorithm of the invention marks different types of board cards with boxes of different colors and gives each board card's name and position, so one board card or several board cards can be identified at the same time; when several are identified, they may be of the same or different types. A minimal sketch of the box-drawing step follows.
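The patent does not name the detection network, so the sketch below only shows the marking step: given (assumed) detector output with a name, a pixel box, and an xyz offset per board card, it draws a color-coded box and label with OpenCV. The class names and colors are illustrative assumptions.

    import cv2
    import numpy as np

    # One color (BGR) per board-card class; the names are placeholders.
    CLASS_COLOURS = {"power_board": (0, 0, 255),
                     "io_board": (0, 255, 0),
                     "cpu_board": (255, 0, 0)}

    def mark_targets(image, detections):
        """Draw each detected board card with its name and offset, mirroring
        the marking the GPU unit performs on the first image information."""
        for det in detections:
            x0, y0, x1, y1 = det["box"]
            colour = CLASS_COLOURS.get(det["name"], (255, 255, 255))
            cv2.rectangle(image, (x0, y0), (x1, y1), colour, 2)
            label = "{} dx={:.2f} dy={:.2f} dz={:.2f}".format(det["name"], *det["offset"])
            cv2.putText(image, label, (x0, max(y0 - 6, 12)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, colour, 1)
        return image

    frame = np.zeros((720, 1280, 3), np.uint8)  # stand-in for the first image
    dets = [{"name": "io_board", "box": (400, 200, 560, 640),
             "offset": (0.04, -0.01, 0.62)}]
    cv2.imwrite("marked.png", mark_targets(frame, dets))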
The deep learning algorithm is trained on a large amount of data, so pictures of equipment from many manufacturers and of the grids and board cards of different cabinets were collected, more than ten thousand samples in total. The raw data are labelled, training is performed on a GPU server in the laboratory, and the trained deep learning model is finally deployed to the GPU unit of the inspection robot. The training process can roughly be divided into four stages: data acquisition, data labelling, algorithm design, and training plus verification. The data are labelled with the names of the different board cards and switches, and the data set is divided into a training set, a validation set, and a test set. The training set is fed into the deep learning network while parameters are continuously updated until the designed loss function converges without overfitting; the validation set monitors the training state during training; and the trained network is finally evaluated on the test set, with retraining performed if the results are poor. A sketch of the data-set split is given below.
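The training stage is described only at a high level; as a minimal sketch of the train/validation/test split it mentions, with the split ratios as assumptions:

    import random

    def split_dataset(samples, train=0.8, val=0.1, seed=42):
        """Shuffle labelled samples and split them into train/val/test sets
        (the 80/10/10 ratio is an illustrative assumption)."""
        items = list(samples)
        random.Random(seed).shuffle(items)
        n_train = int(len(items) * train)
        n_val = int(len(items) * val)
        return (items[:n_train],
                items[n_train:n_train + n_val],
                items[n_train + n_val:])

    labelled = [("img_%05d.png" % i, "label_%05d.txt" % i) for i in range(10000)]
    train_set, val_set, test_set = split_dataset(labelled)
    print(len(train_set), len(val_set), len(test_set))  # 8000 1000 1000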
Step S3: and obtaining a fourth offset between the second camera and the target according to the first offset, the second offset between the first camera and the mechanical arm and the third offset between the second camera and the mechanical arm, and controlling the mechanical arm to drive the second camera to move to a second position according to the fourth offset.
Referring to fig. 4, fig. 4 is a flowchart illustrating the substeps of step S3 in fig. 1. As shown in fig. 4, step S3 includes:
step S31: the processing module obtains the fourth offset according to the first offset, the second offset, and the third offset (a transform-composition sketch is given after step S33);
step S32: the processing module obtains the position information of the second position according to the fourth offset and outputs the position information to the control module;
step S33: the control module controls the mechanical arm to drive the second camera to move to the second position according to the position information of the second position.
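Treating each offset as a rigid transform, the composition in step S31 can be written with 4x4 homogeneous matrices: the first and second offsets place the target in the arm frame, and inverting the third offset re-expresses it for the second camera. The numerical values below are placeholders; in the real system the first offset comes from the depth camera and the second and third from the hand-eye calibration.

    import numpy as np

    def hom(R, t):
        """Build a 4x4 homogeneous transform from rotation R and translation t."""
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        return T

    T_target_cam1 = hom(np.eye(3), [0.04, -0.01, 0.62])  # first offset
    T_cam1_arm = hom(np.eye(3), [0.00, 0.03, 0.05])      # second offset
    T_cam2_arm = hom(np.eye(3), [0.00, -0.03, 0.05])     # third offset

    # Fourth offset: the target pose the second camera must see (step S31).
    T_target_cam2 = np.linalg.inv(T_cam2_arm) @ T_cam1_arm @ T_target_cam1
    print(T_target_cam2[:3, 3])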
Step S4: acquiring second image information of the target through the grid of the cabinet with the second camera.
Step S4 further includes: after the second camera reaches the second position, the control module controls the second camera to capture the second image information, and the second camera outputs the second image information to the processing module.
Step S5: identifying the current state of the target according to the second image information.
Step S5 further includes: the processing module outputs the second image information to a background system, and the background system identifies the current state of the target in the second image information through a deep learning algorithm and a traditional image algorithm.
Specifically, the mechanical arm moves the industrial camera to the specified position according to the coordinate values given by the first camera, and the industrial camera takes its photographs after arriving there. The captured second image information is transmitted directly to the background system without any stitching being performed on the robot: the processing module sends the acquired second image information to the background system, and the background system uses a deep learning algorithm and traditional image algorithms to identify the state of the board-card indicator lamps behind the blurred grid.
The background system first performs stitching. Image stitching mainly comprises feature-point extraction and matching (for example the SIFT or SURF algorithm), image registration (for example the RANSAC algorithm), and image fusion (for example a weighted smoothing algorithm); a number of algorithms were tested and the best one selected. The indicator lamps themselves are recognized by a trained deep learning algorithm, and the lamp state is determined by a traditional image algorithm, such as HSV analysis, although the invention is not limited thereto. A sketch of such a stitching pipeline follows.
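As a sketch of one reasonable stitching pipeline of the kind described (SIFT features, RANSAC registration, weighted fusion), not necessarily the combination the authors finally selected:

    import cv2
    import numpy as np

    def stitch_pair(img_a, img_b):
        """Stitch img_a onto img_b: SIFT keypoints with ratio-test matching
        (feature extraction), RANSAC homography (registration), and a simple
        weighted blend of the overlap (fusion)."""
        gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(gray_a, None)
        kp_b, des_b = sift.detectAndCompute(gray_b, None)
        matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = img_b.shape[:2]
        canvas = cv2.warpPerspective(img_a, H, (2 * w, h))
        canvas[:, :w] = cv2.addWeighted(canvas[:, :w], 0.5, img_b, 0.5, 0)
        return canvas

The lamp state itself can then be read off the stitched image, for example by thresholding in HSV space, along the lines of the traditional image algorithm mentioned above.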
Referring to fig. 5, fig. 5 is a schematic structural diagram of a virtualization processing system according to the present invention. As shown in fig. 5, the blurring processing system of the present invention includes:
a first camera 11 mounted on the robot arm 21;
the main control unit 12 is electrically connected to the mechanical arm 21 and the first camera 11, and after the main control unit 12 controls the mechanical arm 21 to drive the first camera 11 to move to a first position according to a work plan, the main control unit 12 controls the first camera 11 to acquire and obtain first image information of a target in the cabinet;
the GPU unit 13 is used for identifying a target in the cabinet according to the first image information and obtaining a first offset between the target and the first camera 11;
a second camera 14 mounted on the mechanical arm 21, wherein the main control unit 12 obtains a fourth offset between the second camera 14 and the target according to the first offset, the second offset between the first camera 11 and the mechanical arm 21, and the third offset between the second camera 14 and the mechanical arm 21; the main control unit 12 controls the mechanical arm 21 to drive the second camera 14 to move to a second position according to the fourth offset, and controls the second camera 14 to acquire second image information of the target through the grid of the cabinet.
In this embodiment, the GPU unit 13 may be independently arranged, or may be integrated on the main control unit 12.
Further, a background system 15 is included for identifying the current state of the target according to the second image information.
Wherein, the main control unit 12 includes:
a calibration module 121, configured to calibrate the first camera and the second camera by using a camera calibration technique to obtain the second offset and the third offset;
the processing module 122 receives and obtains the position information of the first position according to the work plan;
the control module 123 receives the position information of the first position output by the processing module, and controls the mechanical arm to drive the first camera to move to the first position according to the position information of the first position.
When the mechanical arm 21 moves, the control module 123 controls the first camera 11 to capture a video stream, the first camera 11 outputs the video stream to the processing module 122, and the processing module 122 obtains real-time position information of the mechanical arm 21 according to the video stream.
After the first camera 11 reaches the first position, the control module 123 outputs a first acquisition instruction to the first camera 11, the first camera 11 acquires the first image information according to the first acquisition instruction and outputs the first image information to the GPU unit 13, the GPU unit 13 marks the target in the first image information through a deep learning algorithm and obtains target information and the first offset, and the GPU unit 13 outputs the first image information, the target information, and the first offset marked with the target to the processing module 122.
The processing module 122 obtains the fourth offset according to the first offset, the second offset, and the third offset, the processing module obtains the position information of the second position according to the fourth offset and outputs the position information to the control module 123, and the control module 123 controls the mechanical arm 21 to drive the second camera 14 to move to the second position according to the position information of the second position.
After the second camera 14 reaches the second position, the control module 123 outputs a second acquisition instruction to the second camera 14, the second camera 14 acquires and obtains the second image information, the second camera 14 outputs the second image information to the processing module 122, the processing module 122 outputs the second image information to the background system 15, and the background system 15 identifies the current state of the target in the second image information through a deep learning algorithm and a conventional image algorithm.
The present invention also provides an inspection robot, comprising the mechanical arm 21 and the blurring processing system described above; the blurring processing system is connected to the mechanical arm, and the inspection robot collects and identifies, through the mechanical arm and the blurring processing system, the current state of a target shielded by the cabinet grid.
In conclusion, placing a GPU unit in the inspection robot to process image and video information greatly improves the robot's image processing capability: the robot can run deep learning algorithms with larger networks and higher precision, process high-resolution, high-frame-rate video streams in real time, and handle complex image and video problems. Meanwhile, the different board cards of different cabinets in the railway signal room, including their types and positions, are identified by a deep learning algorithm. In addition, the combination of the depth camera and the mechanical arm enables the inspection robot to blur the cabinet grid, so the state of the board-card indicator lamps in the cabinet can be recognized without opening the door.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. A cabinet grid blurring processing method is characterized by comprising the following steps:
step S1: controlling the mechanical arm to drive the first camera to move to a first position according to the work plan;
step S2: acquiring and obtaining first image information of a target in a cabinet through the first camera, identifying the target in the cabinet according to the first image information, and obtaining a first offset between the target and the first camera;
step S3: obtaining a fourth offset between a second camera and the target according to the first offset, the second offset between the first camera and the mechanical arm and the third offset between the second camera and the mechanical arm, and controlling the mechanical arm to drive the second camera to move to a second position according to the fourth offset;
step S4: acquiring and obtaining second image information of the target through the grid of the cabinet by the second camera;
further comprising step S5: identifying the current state of the target according to the second image information;
the step S1 includes:
step S11: mounting the first camera and the second camera on the robotic arm;
step S12: a calibration module of the main control unit obtains the second offset and the third offset through a camera calibration technology; the camera calibration technology keeps the calibration plate and the mechanical arm base unchanged while the relative position T of the camera and the mechanical arm terminal also remains unchanged, so that two sets of data satisfy

    T_base→plate = T_base→end(1) · T · T_cam→plate(1) = T_base→end(2) · T · T_cam→plate(2)

which is further transformed to obtain

    (T_base→end(2))⁻¹ · T_base→end(1) · T = T · T_cam→plate(2) · (T_cam→plate(1))⁻¹,

and the relative position, namely the offset, of the camera and the mechanical arm terminal can be obtained by using a calibration algorithm such as Tsai-Lenz;
step S13: receiving and acquiring the position information of the first position according to the working plan through a processing module of the main control unit;
step S14: the control module of the main control unit controls the mechanical arm to drive the first camera to move to the first position according to the position information of the first position; when the mechanical arm moves, the control module controls the first camera to collect video streams, the first camera outputs the video streams to the processing module, and the processing module obtains and displays real-time position information of the mechanical arm according to the video streams;
the step S2 includes:
step S21: the control module outputs a first acquisition instruction to the first camera;
step S22: the first camera acquires and obtains the first image information according to the first acquisition instruction and outputs the first image information to the GPU unit;
step S23: the GPU unit marks the target in the first image information through a deep learning algorithm and obtains target information and the first offset; the GPU frames a target detected in the first image information in a rectangular frame or polygonal frame form through a trained deep learning algorithm, and marks the target information and a first offset on the first image information, wherein the target information comprises the name of the target, and the first offset is the relative position information of the target;
step S24: the GPU unit outputs the first image information marked with the target, the target information and the first offset to the processing module.
2. The blurring processing method as claimed in claim 1, wherein said step S3 includes:
step S31: the processing module obtains the fourth offset according to the first offset, the second offset and the third offset;
step S32: the processing module obtains the position information of the second position according to the fourth offset and outputs the position information to the control module;
step S33: the control module controls the mechanical arm to drive the second camera to move to the second position according to the position information of the second position.
3. A blurring processing method as claimed in claim 2, wherein said step S4 further comprises: and after the second camera reaches the second position, the control module controls the second camera to acquire and obtain the second image information, and the second camera outputs the second image information to the processing module.
4. A blurring processing method as claimed in claim 3, wherein said step S5 further comprises: and the processing module outputs the second image information to a background system, and the background system identifies the current state of the target in the second image information through a deep learning algorithm and a traditional image algorithm.
5. A system for blurring a grid of a cabinet, comprising:
the first camera is arranged on the mechanical arm;
the main control unit is electrically connected with the mechanical arm and the first camera, and controls the first camera to acquire and obtain first image information of a target in the cabinet after the mechanical arm is controlled by the main control unit according to a work plan to drive the first camera to move to a first position;
the GPU unit is used for identifying a target in the cabinet according to the first image information and acquiring a first offset between the target and the first camera;
the second camera is arranged on the mechanical arm, the main control unit obtains a fourth offset between the second camera and the target according to the first offset, the second offset between the first camera and the mechanical arm and the third offset between the second camera and the mechanical arm, the main control unit controls the mechanical arm to drive the second camera to move to a second position according to the fourth offset, and the main control unit controls the second camera to acquire second image information of the target through a grid of the cabinet;
The background system is used for identifying the current state of the target according to the second image information;
the main control unit includes:
the calibration module calibrates the first camera and the second camera through a camera calibration technology to obtain the second offset and the third offset; the camera calibration technology keeps the calibration plate and the mechanical arm base unchanged while the relative position T of the camera and the mechanical arm terminal also remains unchanged, so that two sets of data satisfy

    T_base→plate = T_base→end(1) · T · T_cam→plate(1) = T_base→end(2) · T · T_cam→plate(2)

which can be further transformed to obtain

    (T_base→end(2))⁻¹ · T_base→end(1) · T = T · T_cam→plate(2) · (T_cam→plate(1))⁻¹,

and the relative position, namely the offset, of the camera and the mechanical arm terminal can be obtained by using a calibration algorithm such as Tsai-Lenz;
the processing module is used for receiving and acquiring the position information of the first position according to the work plan;
the control module receives the position information of the first position output by the processing module, and controls the mechanical arm to drive the first camera to move to the first position according to the position information of the first position; when the mechanical arm moves, the control module controls the first camera to collect video streams, the first camera outputs the video streams to the processing module, and the processing module obtains real-time position information of the mechanical arm according to the video streams;
After the first camera reaches the first position, the control module outputs a first acquisition instruction to the first camera, the first camera acquires the first image information according to the first acquisition instruction and outputs the first image information to the GPU unit, the GPU unit marks the target in the first image information through a deep learning algorithm and obtains target information and the first offset, and the GPU unit outputs the first image information marked with the target, the target information and the first offset to the processing module; the GPU frames the target detected in the first image information in a rectangular frame or polygonal frame form through a trained deep learning algorithm, and marks the target information and a first offset on the first image information, wherein the target information comprises the name of the target, and the first offset is also the relative position information of the target.
6. The blurring processing system according to claim 5, wherein the processing module obtains the fourth offset according to the first offset, the second offset, and the third offset, the processing module obtains position information of the second position according to the fourth offset and outputs the position information to the control module, and the control module controls the robot arm to drive the second camera to move to the second position according to the position information of the second position.
7. The blurring processing system of claim 6, wherein after the second camera reaches the second position, the control module outputs a second capture instruction to the second camera, the second camera captures and obtains the second image information, and the second camera outputs the second image information to the processing module.
8. A blurring processing system according to claim 7, wherein the processing module outputs the second image information to a back-end system, the back-end system identifying a current state of the object in the second image information through a deep learning algorithm and a conventional image algorithm.
9. An inspection robot, comprising:
a mechanical arm;
the virtualization processing system of any one of the preceding claims 5-8, wherein the virtualization processing system is connected to the robotic arm, and the inspection robot collects and identifies a current state of an object occluded by the cabinet grid through the robotic arm and the virtualization processing system.
CN202010706984.0A 2020-07-21 2020-07-21 Virtualization processing method and system for cabinet grid and inspection robot Active CN111923042B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010706984.0A CN111923042B (en) 2020-07-21 2020-07-21 Virtualization processing method and system for cabinet grid and inspection robot


Publications (2)

Publication Number Publication Date
CN111923042A CN111923042A (en) 2020-11-13
CN111923042B (en) 2022-05-24

Family

ID=73314353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010706984.0A Active CN111923042B (en) 2020-07-21 2020-07-21 Virtualization processing method and system for cabinet grid and inspection robot

Country Status (1)

Country Link
CN (1) CN111923042B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114040431B (en) * 2021-10-08 2023-05-26 中国联合网络通信集团有限公司 Network testing method, device, equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108189043A (en) * 2018-01-10 2018-06-22 北京飞鸿云际科技有限公司 A kind of method for inspecting and crusing robot system applied to high ferro computer room
CN109635875A (en) * 2018-12-19 2019-04-16 浙江大学滨海产业技术研究院 A kind of end-to-end network interface detection method based on deep learning
CN110246175A (en) * 2019-05-24 2019-09-17 国网安徽省电力有限公司检修分公司 Intelligent Mobile Robot image detecting system and method for the panorama camera in conjunction with holder camera
CN110315500A (en) * 2019-07-01 2019-10-11 广州弘度信息科技有限公司 A kind of double mechanical arms crusing robot and its method accurately opened the door
CN110399831A (en) * 2019-07-25 2019-11-01 中国银联股份有限公司 A kind of method for inspecting and device
CN110490854A (en) * 2019-08-15 2019-11-22 中国工商银行股份有限公司 Obj State detection method, Obj State detection device and electronic equipment
CN110614638A (en) * 2019-09-19 2019-12-27 国网山东省电力公司电力科学研究院 Transformer substation inspection robot autonomous acquisition method and system
CN110648319A (en) * 2019-09-19 2020-01-03 国网山东省电力公司电力科学研究院 Equipment image acquisition and diagnosis system and method based on double cameras
CN111427320A (en) * 2020-04-03 2020-07-17 无锡超维智能科技有限公司 Intelligent industrial robot distributed unified scheduling management platform

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8340820B2 (en) * 2010-02-26 2012-12-25 Agilent Technologies, Inc. Robot arm and method of controlling robot arm to avoid collisions
CN103759716B (en) * 2014-01-14 2016-08-17 清华大学 The dynamic target position of mechanically-based arm end monocular vision and attitude measurement method
CN105631875A (en) * 2015-12-25 2016-06-01 广州视源电子科技股份有限公司 Method and system for determining mapping relations between camera coordinates and arm gripper coordinates
US11772270B2 (en) * 2016-02-09 2023-10-03 Cobalt Robotics Inc. Inventory management by mobile robot
CA2977077C (en) * 2017-06-16 2019-10-15 Robotiq Inc. Robotic arm camera system and method
JP7219906B2 (en) * 2018-07-19 2023-02-09 株式会社Icon Learning toy, mobile object for learning toy used for this, and portable information processing terminal for learning toy used for this
CN109785388B (en) * 2018-12-28 2023-04-18 东南大学 Short-distance accurate relative positioning method based on binocular camera
CN110497373B (en) * 2019-08-07 2022-05-27 大连理工大学 Joint calibration method between three-dimensional laser radar and mechanical arm of mobile robot
CN110909653B (en) * 2019-11-18 2022-03-15 南京七宝机器人技术有限公司 Method for automatically calibrating screen cabinet of distribution room by indoor robot
CN111145211B (en) * 2019-12-05 2023-06-30 大连民族大学 Method for acquiring pixel height of head of upright pedestrian of monocular camera

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108189043A (en) * 2018-01-10 2018-06-22 北京飞鸿云际科技有限公司 A kind of method for inspecting and crusing robot system applied to high ferro computer room
CN109635875A (en) * 2018-12-19 2019-04-16 浙江大学滨海产业技术研究院 A kind of end-to-end network interface detection method based on deep learning
CN110246175A (en) * 2019-05-24 2019-09-17 国网安徽省电力有限公司检修分公司 Intelligent Mobile Robot image detecting system and method for the panorama camera in conjunction with holder camera
CN110315500A (en) * 2019-07-01 2019-10-11 广州弘度信息科技有限公司 A kind of double mechanical arms crusing robot and its method accurately opened the door
CN110399831A (en) * 2019-07-25 2019-11-01 中国银联股份有限公司 A kind of method for inspecting and device
CN110490854A (en) * 2019-08-15 2019-11-22 中国工商银行股份有限公司 Obj State detection method, Obj State detection device and electronic equipment
CN110614638A (en) * 2019-09-19 2019-12-27 国网山东省电力公司电力科学研究院 Transformer substation inspection robot autonomous acquisition method and system
CN110648319A (en) * 2019-09-19 2020-01-03 国网山东省电力公司电力科学研究院 Equipment image acquisition and diagnosis system and method based on double cameras
CN111427320A (en) * 2020-04-03 2020-07-17 无锡超维智能科技有限公司 Intelligent industrial robot distributed unified scheduling management platform

Also Published As

Publication number Publication date
CN111923042A (en) 2020-11-13

Similar Documents

Publication Publication Date Title
CN110826538A (en) Abnormal off-duty identification system for electric power business hall
CN109271872B (en) Device and method for judging on-off state and diagnosing fault of high-voltage isolating switch
CN113225387B (en) Visual monitoring method and system for machine room
WO2017057780A1 (en) Data collection device, method, and program for display panel or control panel
CN111539313A (en) Examination cheating behavior detection method and system
CN109544870B (en) Alarm judgment method for intelligent monitoring system and intelligent monitoring system
CN211720329U (en) Intelligent monitoring system for power distribution room
CN112437255A (en) Intelligent video monitoring system and method for nuclear power plant
CN111923042B (en) Virtualization processing method and system for cabinet grid and inspection robot
CN109308448A (en) A method of it prevents from becoming distribution maloperation using image processing techniques
CN113177614A (en) Image recognition system and method for power supply switch cabinet of urban rail transit
CN111951161A (en) Target identification method and system and inspection robot
CN112564291A (en) Power equipment pressing plate state monitoring system and monitoring method
CN110247328A (en) Position judging method based on image recognition in switchgear
CN113044694A (en) Construction site elevator people counting system and method based on deep neural network
CN112532927A (en) Intelligent safety management and control system for construction site
CN112542022A (en) Automatic inspection system for intelligent production
CN116311034A (en) Robot inspection system based on contrast detection
CN111917978B (en) Adjusting device and method of industrial camera and shooting device
CN112202247B (en) Isolating switch on-off monitoring system and method based on BP neural network
CN209929831U (en) Switch cabinet with image recognition position
CN114387542A (en) Video acquisition unit abnormity identification system based on portable ball arrangement and control
CN112706162A (en) Method for realizing internal patrol of secondary equipment screen cabinet through patrol robot
CN110969813A (en) Railway substation unattended monitoring method based on edge calculation
CN112085654A (en) Configurable simulation screen recognition system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant