CN113946154B - Visual identification method and system for inspection robot - Google Patents


Info

Publication number
CN113946154B
Authority
CN
China
Prior art keywords
distance
target hardware
actual
navigation camera
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111558006.7A
Other languages
Chinese (zh)
Other versions
CN113946154A (en)
Inventor
贾绍春
周伟亮
李方
付守海
薛家驹
邹霞
Current Assignee
Guangdong Keystar Intelligence Robot Co ltd
Original Assignee
Guangdong Keystar Intelligence Robot Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Keystar Intelligence Robot Co ltd filed Critical Guangdong Keystar Intelligence Robot Co ltd
Priority to CN202111558006.7A
Publication of CN113946154A
Application granted
Publication of CN113946154B

Links

Images

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

A visual identification method for a line patrol robot comprises the following steps. Step A: determine the advancing direction of the inspection robot, and establish a communication connection between the image recognition module and the motion control module. Step B: acquire a video frame from the navigation camera and perform object recognition on it through the navigation camera recognition sub-module of the image recognition module; object recognition includes judging whether an object is a hardware fitting and measuring the distance between the object and the navigation camera. When the object is judged to be a hardware fitting, judge whether the distance between the fitting and the navigation camera is smaller than a preset distance; if so, judge that the inspection robot has entered the bridge-crossing state and execute step C. Step C: acquire a video frame from the monitoring camera, perform object recognition again through the monitoring camera recognition sub-module of the image recognition module, measure the actual distance between the target hardware fitting and the wheels of the inspection robot, and execute step D. Step D: the image recognition module sends an obstacle-crossing strategy instruction to the motion control module, and the motion control module executes the obstacle-crossing operation according to that instruction.

Description

Visual identification method and system for inspection robot
Technical Field
The invention relates to the technical field of inspection robots, in particular to a visual identification method and a visual identification system of an inspection robot.
Background
With the rapid development of society and the economy, residential and industrial demand for electric power keeps rising, and the safety state of transmission lines directly affects the stable operation of the power grid and national economic development. An inspection robot equipped with multiple high-definition cameras is a novel, efficient, intelligent online inspection device that is gradually replacing traditional manual inspection, improving both the efficiency and the precision of online inspection. When inspecting a high-voltage line, the inspection robot moves on two travelling wheels; over a whole transmission line it must climb past 10 or even more than 100 electric towers, the internal structure of each tower consists of several hardware fittings, and the robot must therefore perform obstacle-crossing operations while patrolling the line;
at present, a line patrol robot is controlled by manual control or a preset database, the requirement on the capability of a controller is high, the workload of setting the preset database by the controller is high, the line patrol robot needs to be manually controlled to carry out configuration of a tower-passing step database on each electric tower, and due to the influence of factors of a field environment, communication signals are unstable, an operator needs to follow the robot to go to each electric tower for operation, so that the line patrol efficiency is low, and the construction period is long; and the later-period line changes, the database needs to be manufactured again, and the later-period maintenance cost is higher.
Disclosure of Invention
To address the defects in the background art, the invention aims to provide a visual identification method and system for an inspection robot, in which network cameras are installed in a specific manner and the inspection robot is controlled to cross obstacles automatically based on an image recognition module. This effectively reduces the cost of manual operation, improves inspection efficiency, and shortens the construction period, while also reducing later labour costs.
In order to achieve the purpose, the invention adopts the following technical scheme:
a visual identification method of a line patrol robot comprises the following steps:
step A: determining the advancing direction of the inspection robot, and establishing communication connection between the image recognition module and the motion control module;
step B: acquiring a video frame of the navigation camera, and performing object recognition on the video frame through the navigation camera recognition sub-module of the image recognition module, wherein the object recognition comprises judging whether an object is a hardware fitting and measuring the distance between the object and the navigation camera;
when the object is judged to be a hardware fitting, judging whether the distance between the hardware fitting and the navigation camera is smaller than a preset distance, if so, judging that the inspection robot enters a bridge crossing state, and executing the step C;
step C: acquiring a video frame of the monitoring camera, performing object recognition again through the monitoring camera recognition sub-module of the image recognition module, measuring the actual distance between the target hardware fitting and the wheels of the inspection robot, and executing step D;
step D: the image identification module sends an obstacle crossing strategy instruction to the motion control module, and the motion control module executes obstacle crossing operation according to the obstacle crossing strategy instruction.
Preferably, in the step B, the identifying step of the navigation camera identification sub-module includes:
step B1: acquiring a real-time video frame of a navigation camera;
step B2: sequentially carrying out image distortion correction, image level correction, image slicing, image noise reduction and image enhancement on the video frame;
step B3: hardware fitting detection, namely judging whether a hardware fitting is present in the video frame; if so, executing step B4;
step B4: extracting the hardware fitting contour, acquiring the minimum circumscribed rectangle of the target hardware fitting and its pixel area, and obtaining the actual distance between the target hardware fitting and the navigation camera from that pixel area.
Preferably, the step B4 includes:
acquiring the estimated actual distance from the target hardware fitting to the navigation camera according to formula one:

A / B = f / d   (formula one)
wherein:
f represents the distance of the image plane to the navigation camera plane;
d represents the estimated actual distance from the target hardware to the navigation camera;
a represents the pixel size of the target hardware on an image plane;
and B represents the actual size of the target hardware.
Preferably, the actual size of the target hardware fitting is obtained, including the actual width of the hardware fitting;
acquiring the pixel size of a target hardware on an image plane, including the pixel width of the hardware;
acquiring the estimated actual distance from the target hardware fitting to the navigation camera in the width direction according to formula two:

d_w = f · W / p_w   (formula two)
wherein:
d_w represents the estimated actual distance from the target hardware fitting to the navigation camera in the width direction;
w represents the actual width of the target hardware;
f represents the distance of the image plane to the navigation camera plane;
p_w represents the pixel width of the target hardware fitting.
Preferably, the actual size of the target hardware fitting is obtained, including the actual height of the fitting;
acquiring the pixel size of a target hardware on an image plane, including the pixel height of the hardware;
acquiring the estimated actual distance from the target hardware fitting to the navigation camera in the height direction according to formula three:

d_h = f · H / p_h   (formula three)
wherein:
d_h represents the estimated actual distance from the target hardware fitting to the navigation camera in the height direction;
h represents the actual height of the target hardware;
f represents the distance of the image plane to the navigation camera plane;
p_h represents the pixel height of the target hardware fitting.
Preferably, the actual size of the target hardware fitting is obtained, the actual width and the actual height of the hardware fitting are included, and the minimum circumscribed rectangular area of the target hardware fitting is obtained according to the actual width and the actual height;
acquiring the minimum-error actual distance according to formula four from the estimated actual distances from the target hardware fitting to the navigation camera in the width and height directions:

d = √(d_w · d_h) = f · √(S / (p_w · p_h))   (formula four)
wherein:
d represents the actual distance from the target hardware to the navigation camera, namely the minimum error actual distance;
d_w represents the estimated actual distance from the target hardware fitting to the navigation camera in the width direction;
d_h represents the estimated actual distance from the target hardware fitting to the navigation camera in the height direction;
w represents the actual width of the target hardware;
f represents the distance of the image plane to the navigation camera plane;
p_w represents the pixel width of the target hardware fitting;
h represents the actual height of the target hardware;
p_h represents the pixel height of the target hardware fitting;
and S represents the minimum circumscribed rectangular area of the target hardware.
Preferably, in step B, judging whether the distance between the hardware fitting and the navigation camera is less than the preset distance comprises judging whether the estimated distance between the target hardware fitting and the wheels of the line patrol robot is less than the preset distance;
acquiring the estimated distance between the target hardware fitting and the wheels of the line patrol robot according to formula five:

L = d - i   (formula five)
wherein:
L represents the estimated distance between the target hardware fitting and the wheels of the line patrol robot;
d represents the estimated actual distance from the target hardware to the navigation camera, namely the minimum error actual distance;
i represents the distance from the navigation camera to the wheels of the line patrol robot.
Preferably, in step C, measuring the actual distance between the target hardware fitting and the wheels of the line patrol robot comprises:
acquiring the actual distance between the target hardware fitting and the wheels of the inspection robot according to formula six:

D = n · P   (formula six)
wherein:
D represents the actual distance between the target hardware fitting and the wheels of the inspection robot;
P represents the pixel distance between the target hardware fitting and the wheels of the inspection robot;
n represents the scale factor.
Preferably, in the step B3, the determining whether the hardware is present in the video frame includes the following steps:
step B31: inputting the video frame image processed in the steps B1 and B2 into a trained convolutional neural network;
step B32: acquiring the type and confidence of the target in the video frame image, and judging whether the confidence of the target hardware fitting reaches a threshold value; if so, judging that the current target hardware fitting is the target; if not, acquiring the next video frame image and re-executing step B31.

A visual identification system of a line patrol robot applies the above visual identification method; the line patrol robot is provided with a navigation camera, a monitoring camera, an image recognition module and a motion control module;
the navigation camera and the monitoring camera are respectively used for providing real-time video frames for the image recognition module;
the image identification module is used for carrying out real-time target detection on real-time video frames, calculating the real-time distance between an obstacle and a wheel and generating an obstacle crossing strategy instruction;
and the motion control module is used for executing obstacle crossing action according to the obstacle crossing strategy instruction.
Preferably, the navigation camera is installed in front of the inspection robot;
the monitoring camera is installed in the top of patrolling the line robot to make the walking wheel of patrolling the line robot fall into the shooting scope of monitoring camera.
The beneficial effect that this application's technical scheme produced:
the invention adopts the visual identification method, can keep the navigation camera to always observe the front, effectively enlarges the observation visual field range, switches to the monitoring camera to judge the actual distance between the obstacle and the inspection robot when the inspection robot encounters the obstacle based on the visual identification, realizes the accurate control of the distance between the walking wheel and the obstacle, and provides the actual basis for the subsequent obstacle crossing action. .
Drawings
Fig. 1 is a flowchart of a visual recognition method of a patrol robot according to an embodiment of the present invention;
FIG. 2 is an installation schematic diagram of an inspection robot installation navigation camera and a surveillance camera of one embodiment of the invention;
FIG. 3 is a schematic plan view of an inspection robot with navigation cameras and surveillance cameras installed thereon according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of acquiring an actual distance of a target from a camera in a high direction according to one embodiment of the present invention;
FIG. 5 is a schematic diagram of acquiring an actual distance from a walking wheel of the inspection robot to a target according to one embodiment of the invention;
fig. 6 is a frame diagram of a vision recognition system of a patrol robot according to an embodiment of the present invention;
FIG. 7 is a recognition framework diagram of the navigation camera recognition sub-module of one embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further explained by the specific implementation mode in combination with the attached drawings.
At present, line patrol robots are controlled either manually or through a preset database. Manual control places high demands on the operator's skill, while building a preset database is labour-intensive: the robot must be manually driven to configure a tower-passing step database at every electric tower, and because field conditions make communication signals unstable, an operator has to follow the robot to each tower, so line patrol efficiency is low and the construction period is long. Moreover, whenever the line changes later, the database must be rebuilt, so later maintenance costs are high. To solve these problems, the present application proposes a visual identification method for a patrol robot, as shown in fig. 1, comprising the following steps:
step A: determining the advancing direction of the inspection robot, and establishing communication connection between the image recognition module and the motion control module;
step B: acquiring a video frame of the navigation camera, and performing object recognition on the video frame through the navigation camera recognition sub-module of the image recognition module, wherein the object recognition comprises judging whether an object is a hardware fitting and measuring the distance between the object and the navigation camera;
when the object is judged to be a hardware fitting, judging whether the distance between the hardware fitting and the navigation camera is smaller than a preset distance, if so, judging that the inspection robot enters a bridge crossing state, and executing the step C;
step C: acquiring a video frame of the monitoring camera, performing object recognition again through the monitoring camera recognition sub-module of the image recognition module, measuring the actual distance between the target hardware fitting and the wheels of the inspection robot, and executing step D;
step D: the image identification module sends an obstacle crossing strategy instruction to the motion control module, and the motion control module executes obstacle crossing operation according to the obstacle crossing strategy instruction.
In this embodiment, after the advancing direction of the inspection robot is determined, the image recognition module is connected with the motion control module. The image recognition module recognizes and analyzes images and sends an obstacle-crossing strategy to the motion control module according to the analysis result, so that the motion control module can control the inspection robot to execute the obstacle-crossing operation according to the obstacle-crossing strategy instruction;
the navigation camera is arranged in front of the inspection robot;
the monitoring camera is installed in the top of patrolling the line robot to make the walking wheel of patrolling the line robot fall into the shooting scope of monitoring camera.
In this embodiment, the inspection robot is provided with two navigation cameras and two monitoring cameras. As shown in fig. 3, the navigation cameras are installed at a horizontal inclination of 20 degrees (within a range of plus or minus 5 degrees) and the monitoring cameras at a horizontal inclination of 40 degrees (within a range of plus or minus 5 degrees). To obtain a better field of view, the navigation camera is arranged on the lower part of the robot, which in this embodiment can be understood as the lower part of the inspection robot's body. The mechanical arm is arranged on the upper part of the body, so in order to see the wheels on the mechanical arm, the monitoring camera is arranged on the upper part of the robot and inclined at an angle that ensures the wheels can be observed; the monitoring camera is normally in a closed state. The specific installation angles are shown in fig. 2. At these angles the target to be detected occupies an optimal position in the video frame, the inspection robot can recognize a target ahead in advance and execute an effective obstacle-crossing strategy, and the viewing angle is best; these angles were determined to be optimal through testing.
The process by which the image recognition module analyzes images may be as follows. First, a video frame of the navigation camera is acquired and object recognition is performed on it to judge whether a hardware fitting is present. If a fitting is present, the distance between the fitting and the navigation camera is acquired and compared with the preset distance; if it is below the preset distance, the inspection robot is judged to have entered the bridge-crossing state. The distance between the fitting and the navigation camera yields only a rough wheel distance, that is, a rough distance between the fitting and the wheels of the inspection robot; when this distance falls below the threshold, the robot is considered to have entered the bridge-crossing state, the navigation camera is closed, and the monitoring camera is started;
the method comprises the steps of obtaining a video frame of a monitoring camera, carrying out object recognition on the video frame, measuring and calculating the actual wheel distance between a target hardware and a wheel of the inspection robot, and generating a corresponding obstacle crossing strategy instruction according to the actual wheel distance.
Preferably, as shown in fig. 7, in the step B, the identifying step of the navigation camera identification sub-module includes:
step B1: acquiring a real-time video frame of a navigation camera;
step B2: sequentially carrying out image distortion correction, image level correction, image slicing, image noise reduction and image enhancement on the video frame;
step B3: hardware fitting detection, namely judging whether a hardware fitting is present in the video frame; if so, executing step B4;
step B4: extracting the hardware fitting contour, acquiring the minimum circumscribed rectangle of the target hardware fitting and its pixel area, and obtaining the actual distance between the target hardware fitting and the navigation camera from that pixel area.
In this embodiment, the image recognition module mainly comprises three parts. The first is image preprocessing, which includes image distortion correction, image level correction, image slicing, image noise reduction and image enhancement. The second is target detection, which mainly covers hardware fitting detection and travelling wheel detection. The third is distance conversion, which mainly covers contour extraction, minimum circumscribed rectangle fitting, calculation of the rectangle's center and area, and conversion to actual distance through a measured scale. Through these techniques, the actual distance between the target and the inspection robot can be obtained effectively, guiding the robot to execute a corresponding obstacle-crossing strategy.
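The distance-conversion step (contour to rectangle to pixel area) can be illustrated with a small stand-alone sketch. For simplicity this computes an axis-aligned bounding box of a binary mask in plain Python; the patent's minimum circumscribed rectangle may be rotated (as produced by, e.g., OpenCV's minAreaRect), so this is only an approximation:

```python
def bounding_box(mask):
    """Pixel width, height and area of the axis-aligned bounding box of a
    binary mask (nested lists; nonzero marks hardware-fitting pixels).
    Illustrative stand-in for the minimum circumscribed rectangle of step B4."""
    coords = [(y, x) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not coords:
        return None  # no fitting pixels detected in this frame
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    w = max(xs) - min(xs) + 1
    h = max(ys) - min(ys) + 1
    return w, h, w * h

mask = [[0] * 10 for _ in range(10)]
for y in range(2, 5):          # a blob 6 px wide and 3 px high
    for x in range(3, 9):
        mask[y][x] = 1
print(bounding_box(mask))  # (6, 3, 18)
```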
Preferably, as shown in fig. 4, the step B4 includes:
acquiring the estimated actual distance from the target hardware fitting to the navigation camera according to formula one:

A / B = f / d   (formula one)
wherein:
f represents the distance of the image plane to the navigation camera plane;
d represents the estimated actual distance from the target hardware to the navigation camera;
a represents the pixel size of the target hardware on an image plane;
and B represents the actual size of the target hardware.
It should be noted that the parameter d has a broad meaning in the formula: it specifically covers both the actual distance from the target hardware fitting to the navigation camera in the width direction and that in the height direction.
Rearranging formula one, the estimated actual distance from the target hardware fitting to the navigation camera is d = f · B / A.
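A minimal sketch of this rearranged pinhole relation, assuming f is expressed in pixels and the fitting size B in metres (so the result is in metres):

```python
def estimate_distance(f_px, pixel_size_px, actual_size_m):
    """Formula one rearranged: A / B = f / d  =>  d = f * B / A."""
    return f_px * actual_size_m / pixel_size_px

# A fitting 0.2 m across that appears 100 px wide under f = 1000 px
print(estimate_distance(1000, 100, 0.2))  # 2.0
```

Applying the same relation with the fitting's width (W, p_w) or height (H, p_h) yields formulas two and three respectively.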
Preferably, the actual size of the target hardware fitting is obtained, including the actual width of the hardware fitting;
acquiring the pixel size of a target hardware on an image plane, including the pixel width of the hardware;
acquiring the estimated actual distance from the target hardware fitting to the navigation camera in the width direction according to formula two:

d_w = f · W / p_w   (formula two)
wherein:
d_w represents the estimated actual distance from the target hardware fitting to the navigation camera in the width direction;
w represents the actual width of the target hardware;
f represents the distance of the image plane to the navigation camera plane;
p_w represents the pixel width of the target hardware fitting.
In this embodiment, formula two is obtained from formula one by substituting the pixel width p_w of the target hardware fitting for A and the actual width W for B;
preferably, the actual size of the target hardware fitting is obtained, including the actual height of the fitting;
acquiring the pixel size of a target hardware on an image plane, including the pixel height of the hardware;
acquiring the estimated actual distance from the target hardware fitting to the navigation camera in the height direction according to formula three:

d_h = f · H / p_h   (formula three)
wherein:
d_h represents the estimated actual distance from the target hardware fitting to the navigation camera in the height direction;
h represents the actual height of the target hardware;
f represents the distance of the image plane to the navigation camera plane;
p_h represents the pixel height of the target hardware fitting.
Similarly, formula three is obtained by substituting the pixel height p_h of the target hardware fitting for A and the actual height H for B;
preferably, the actual size of the target hardware fitting is obtained, the actual width and the actual height of the hardware fitting are included, and the minimum circumscribed rectangular area of the target hardware fitting is obtained according to the actual width and the actual height;
preferably, the actual size of the target hardware fitting is obtained, including the actual length of the hardware fitting, and the minimum circumscribed rectangular area of the target hardware fitting is obtained according to the actual length, width and height;
the method comprises the steps that internal parameters f of a camera can be obtained through calibration of the camera, meanwhile, the size of a hardware fitting is actually measured by using measuring tools such as a vernier caliper and the like, the size is stored in a database, when the type of the hardware fitting is detected and identified, the data of the corresponding hardware fitting is searched in the database, the actual width and the actual height of the hardware fitting can be obtained, how the estimated wheel distance from the hardware fitting to a line patrol robot is obtained can be known through a formula I, however, in order to be more accurate, the estimated wheel distances in the height direction and the width direction of the hardware fitting need to be calculated through a formula II and a formula III respectively, then the estimated wheel distances in the width direction and the height direction are substituted into a formula IV, a balanced minimum error actual distance is calculated, and the minimum error actual distance is the distance from a navigation camera to the hardware fitting actually;
acquiring the minimum-error actual distance according to formula four from the estimated actual distances from the target hardware fitting to the navigation camera in the width and height directions:

d = √(d_w · d_h) = f · √(S / (p_w · p_h))   (formula four)
wherein:
d represents the actual distance from the target hardware to the navigation camera, namely the minimum error actual distance;
d_w represents the estimated actual distance from the target hardware fitting to the navigation camera in the width direction;
d_h represents the estimated actual distance from the target hardware fitting to the navigation camera in the height direction;
w represents the actual width of the target hardware;
f represents the distance of the image plane to the navigation camera plane;
p_w represents the pixel width of the target hardware fitting;
h represents the actual height of the target hardware;
p_h represents the pixel height of the target hardware fitting;
and S represents the minimum circumscribed rectangular area of the target hardware.
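Reading formula four as a balanced (geometric-mean) combination of the width- and height-direction estimates, which is an assumed reconstruction since the formula itself appears only as an image in the source, the computation looks like this:

```python
import math

def min_error_distance(f_px, W, H, p_w, p_h):
    """Assumed reconstruction of formula four: the geometric mean of the
    width-direction estimate d_w (formula two) and the height-direction
    estimate d_h (formula three), which equals f * sqrt(S / (p_w * p_h))
    with S = W * H the circumscribed-rectangle area."""
    d_w = f_px * W / p_w
    d_h = f_px * H / p_h
    return math.sqrt(d_w * d_h)

# 0.2 m x 0.1 m fitting seen as 100 px x 50 px with f = 1000 px
print(min_error_distance(1000, 0.2, 0.1, 100, 50))  # 2.0
```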
Preferably, in step B, judging whether the distance between the hardware fitting and the navigation camera is less than the preset distance comprises judging whether the estimated distance between the target hardware fitting and the wheels of the line patrol robot is less than the preset distance;
in the above, the distance from the navigation camera to the hardware fitting is obtained through the formula four, the obtained distance is substituted into the formula five, the estimated wheel distance between the target hardware fitting and the line patrol robot can be obtained, the estimated wheel distance is not an accurate distance, when the estimated wheel distance is smaller than the preset distance, the line patrol robot is considered to enter a bridge crossing state, the monitoring camera is started at the moment, object recognition is carried out, and the actual distance between the wheels of the line patrol robot and the hardware fitting can be measured and calculated;
as shown in fig. 5, the estimated distance between the target hardware fitting and the wheels of the inspection robot is acquired according to formula five:

L = d - i   (formula five)
wherein:
L represents the estimated distance between the target hardware fitting and the wheels of the line patrol robot;
d represents the estimated actual distance from the target hardware to the navigation camera, namely the minimum error actual distance;
I represents the distance from the navigation camera to the wheels of the line patrol robot.
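The bridge-state check above reduces to a subtraction and a threshold comparison. A minimal sketch, assuming formula five takes the form D_e = d − I as read from the symbol list (the preset threshold value here is illustrative):

```python
# Hedged sketch of formula five and the bridge-crossing decision:
# d is the estimated camera-to-fitting distance (formula four), and
# I is the fixed camera-to-wheel offset of the robot.

def estimated_wheel_distance(d_fitting, camera_to_wheel):
    """Formula five (assumed form): D_e = d - I."""
    return d_fitting - camera_to_wheel

def enters_bridge_state(d_fitting, camera_to_wheel, preset=0.5):
    """The robot is considered to enter the bridge-crossing state when
    the estimated wheel distance falls below the preset distance."""
    return estimated_wheel_distance(d_fitting, camera_to_wheel) < preset
```

Once this check fires, the system switches from the navigation camera to the monitoring camera for the precise measurement of step C.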
Preferably, in the step C, the measuring and calculating the actual wheel distance of the line patrol robot of the target hardware includes:
acquiring the actual wheel distance between a target hardware fitting and the inspection robot according to a formula six;
D_a = n × P
wherein:
D_a represents the actual distance between the target hardware fitting and the wheels of the inspection robot;
P represents the pixel distance between the target hardware fitting and the wheels of the inspection robot;
n represents the scale factor (the ratio of actual distance to pixel distance).
Because of the mounting angle of the monitoring camera, the camera can directly observe the pixel distance between the wheel and the hardware fitting, and the actual distance is obtained by conversion through the scale factor. With the camera position fixed, the scale factor is constant, but it must be measured at a fixed distance to determine its specific value; the corresponding scale factor is therefore obtained through actual measurement in advance. The actual distance between the target hardware fitting and the wheels of the inspection robot can then be known from the pixel distance directly observed by the monitoring camera.
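The calibrate-once-then-convert procedure described above can be sketched as two one-line functions. A minimal illustration, assuming formula six takes the form D_a = n × P; all numeric values below are illustrative, not measured:

```python
# Sketch of the scale-based conversion: the scale factor n is calibrated
# once in advance at a fixed, known distance, then the actual
# wheel-to-fitting distance follows from the observed pixel distance.

def calibrate_scale(known_actual_m, observed_pixels):
    """Return the scale factor n (metres per pixel) from one measurement."""
    return known_actual_m / observed_pixels

def actual_wheel_distance(pixel_distance, scale_n):
    """Formula six (assumed form): D_a = n * P."""
    return scale_n * pixel_distance
```

For example, if a known 1 m span is observed as 500 pixels at the fixed camera position, n = 0.002 m/pixel, and a 250-pixel wheel-to-fitting gap corresponds to 0.5 m.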
Preferably, in the step B3, the determining whether the hardware is present in the video frame includes the following steps:
step B31: inputting the video frame image processed in the steps B1 and B2 into a trained convolutional neural network;
step B32: acquiring the type and confidence of each target in the video frame image, and judging whether the confidence of the target hardware fitting reaches the threshold; if so, judging that the current target is the target hardware fitting; if not, acquiring the next video frame image and re-executing step B31.
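Steps B31–B32 form a simple confidence-gating loop over frames. The sketch below is a stand-in: the `detector` callable, the class name `"fitting"`, and the 0.8 threshold are illustrative assumptions, since the patent does not specify the network's output interface.

```python
# Minimal sketch of the B31-B32 loop: feed successive video frames to a
# trained detector and accept the first detection whose confidence for
# the hardware-fitting class reaches the threshold.

def find_target_fitting(frames, detector, threshold=0.8):
    for frame in frames:
        # detector(frame) is assumed to yield (class_name, confidence) pairs
        for cls, conf in detector(frame):
            if cls == "fitting" and conf >= threshold:
                return frame, conf  # current target confirmed (step B32)
    return None  # no frame reached the confidence threshold
```

In the real system the loop would run over a live stream rather than a finite list, but the gating logic is the same.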
In the present application, the training process of the neural network includes the following steps:
step one: normalizing the input image data and converting it into a single-channel matrix format;
step two: applying the Mish activation function to the data processed in step one, converting the linear data into nonlinear data;
step three: inputting the nonlinear data into the convolutional neural network for convolution operation, extracting the feature information of the image data, and regressing and classifying the Prediction frame of the obstacle and the predicted type of the obstacle;
step four: calculating the gap between the Prediction and the real obstacle information True in the test set by using a Loss function CIOU _ Loss;
wherein the loss function is:
CIOU_Loss = 1 − IOU + ρ²/c² + α·v, where v = (4/π²) × (arctan(w_t/h_t) − arctan(w_pr/h_pr))² and α = v/(1 − IOU + v);
wherein: IOU represents the ratio of the overlapping area of the Prediction and True rectangles to the area of the True rectangle;
ρ represents the Euclidean distance between the center point of the Prediction rectangle and the center point of the True rectangle;
c represents the length of the diagonal of the True rectangle;
w_t represents the width of the True rectangle;
h_t represents the height of the True rectangle;
w_pr represents the width of the Prediction rectangle;
h_pr represents the height of the Prediction rectangle;
step five: back-propagating the result of the loss function CIOU_Loss by using the stochastic gradient descent optimization method, and performing iterative training with the optimal solution as a new round of input data;
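The training loop above hinges on the CIOU loss. Below is a plain-Python sketch of the commonly used CIOU definition; one detail is an assumption: the text reads the normalizing diagonal c as belonging to the True rectangle, while the standard formulation (used here) takes the diagonal of the smallest box enclosing both rectangles.

```python
import math

def ciou_loss(pred, true):
    """CIOU loss for boxes given as (cx, cy, w, h)."""
    px, py, pw, ph = pred
    tx, ty, tw, th = true
    # intersection-over-union of the two axis-aligned rectangles
    ix = max(0.0, min(px + pw/2, tx + tw/2) - max(px - pw/2, tx - tw/2))
    iy = max(0.0, min(py + ph/2, ty + th/2) - max(py - ph/2, ty - th/2))
    inter = ix * iy
    iou = inter / (pw*ph + tw*th - inter)
    # squared center distance, and squared diagonal of the smallest
    # enclosing box (standard choice; the patent text suggests True's own
    # diagonal, so this is an assumption)
    rho2 = (px - tx)**2 + (py - ty)**2
    cw = max(px + pw/2, tx + tw/2) - min(px - pw/2, tx - tw/2)
    ch = max(py + ph/2, ty + th/2) - min(py - ph/2, ty - th/2)
    c2 = cw**2 + ch**2
    # aspect-ratio consistency term and its trade-off weight
    v = (4 / math.pi**2) * (math.atan(tw/th) - math.atan(pw/ph))**2
    alpha = v / (1 - iou + v) if (1 - iou + v) > 0 else 0.0
    return 1 - iou + rho2 / c2 + alpha * v
```

For identical boxes the loss is zero; any offset in center, size, or aspect ratio increases it, which is what drives the gradient-descent update of step five.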
a visual identification system of a line patrol robot, as shown in fig. 6, applies any one of the visual identification methods of the line patrol robot, and the line patrol robot is provided with a navigation camera, a monitoring camera, an image identification module and a motion control module;
the navigation camera and the monitoring camera are respectively used for providing real-time video frames for the image recognition module;
the image identification module is used for carrying out real-time target detection on real-time video frames, calculating the real-time distance between an obstacle and a wheel and generating an obstacle crossing strategy instruction;
and the motion control module is used for executing obstacle crossing action according to the obstacle crossing strategy instruction.
The technical principle of the present invention is described above in connection with specific embodiments. The description is made for the purpose of illustrating the principles of the invention and should not be construed in any way as limiting the scope of the invention. Based on the explanations herein, those skilled in the art will be able to conceive of other embodiments of the present invention without inventive effort, which would fall within the scope of the present invention.

Claims (8)

1. A visual identification method of a line patrol robot is characterized in that:
the method comprises the following steps:
step A: determining the advancing direction of the inspection robot, and establishing communication connection between the image recognition module and the motion control module;
and B: acquiring a video frame of a navigation camera, and performing object recognition on the video frame through a navigation camera recognition sub-module of an image recognition module, wherein the object recognition comprises judging whether an object is a hardware fitting or not and measuring and calculating the distance between the object and the navigation camera;
when the object is judged to be a hardware fitting, judging whether the distance between the hardware fitting and the navigation camera is smaller than a preset distance, if so, judging that the inspection robot enters a bridge crossing state, and executing the step C;
and C: acquiring a video frame of the monitoring camera, performing object recognition again through a monitoring camera recognition submodule of the image recognition module, measuring and calculating the actual wheel distance between the target hardware fitting and the inspection robot, and executing the step D;
step D: the image identification module sends an obstacle crossing strategy instruction to the motion control module, and the motion control module executes obstacle crossing operation according to the obstacle crossing strategy instruction;
in step B, the identifying step of the navigation camera identification submodule includes:
step B1: acquiring a real-time video frame of a navigation camera;
step B2: sequentially carrying out image distortion correction, image level correction, image slicing, image noise reduction and image enhancement on the video frame;
step B3: detecting hardware, namely judging whether the video frame has hardware or not, and if so, executing the step B4;
step B4: extracting the hardware contour, acquiring a minimum external rectangle of the target hardware, acquiring the pixel area of the minimum external rectangle, and acquiring the actual distance between the target hardware and the navigation camera through the pixel area of the minimum external rectangle;
the step B4 includes:
acquiring an estimated actual distance from a target hardware to a navigation camera according to a formula I;
d = (f × B) / A
-formula one;
wherein:
f represents the distance of the image plane to the navigation camera plane;
d represents the estimated actual distance from the target hardware to the navigation camera;
a represents the pixel size of the target hardware on an image plane;
and B represents the actual size of the target hardware.
2. The visual recognition method of the inspection robot according to claim 1, characterized in that:
acquiring the actual size of a target hardware fitting, including the actual width of the hardware fitting;
acquiring the pixel size of a target hardware on an image plane, including the pixel width of the hardware;
acquiring an estimated actual distance from the target hardware to the navigation camera in the width direction according to a formula II;
D_w = (f × w) / w_p
- - -formula two;
wherein:
D_w represents the estimated actual distance from the target hardware fitting to the navigation camera in the width direction;
w represents the actual width of the target hardware;
f represents the distance of the image plane to the navigation camera plane;
w_p represents the pixel width of the target hardware fitting;
acquiring the actual size of a target hardware fitting, including the actual height of the hardware fitting;
acquiring the pixel size of a target hardware on an image plane, including the pixel height of the hardware;
acquiring an estimated actual distance from the target hardware to the navigation camera in the high direction according to a formula III;
D_h = (f × h) / h_p
- - -formula three;
wherein:
D_h represents the estimated actual distance from the target hardware fitting to the navigation camera in the height direction;
h represents the actual height of the target hardware;
f represents the distance of the image plane to the navigation camera plane;
h_p represents the pixel height of the target hardware fitting.
3. The visual recognition method of the inspection robot according to claim 2, characterized in that:
acquiring the actual size of a target hardware fitting, including the actual width and the actual height of the hardware fitting, and acquiring the minimum circumscribed rectangular area of the target hardware fitting according to the actual width and the actual height;
acquiring the actual distance of the minimum error according to the fourth formula according to the estimated actual distances from the target hardware in the width direction and the height direction to the navigation camera;
d = f × √(S / (w_p × h_p))
- - -formula four;
wherein:
d represents the actual distance from the target hardware to the navigation camera, namely the minimum error actual distance;
D_w represents the estimated actual distance from the target hardware fitting to the navigation camera in the width direction;
D_h represents the estimated actual distance from the target hardware fitting to the navigation camera in the height direction;
w represents the actual width of the target hardware;
f represents the distance of the image plane to the navigation camera plane;
w_p represents the pixel width of the target hardware fitting;
h represents the actual height of the target hardware;
h_p represents the pixel height of the target hardware fitting;
and S represents the minimum circumscribed rectangular area of the target hardware.
4. The visual recognition method of the inspection robot according to claim 1, characterized in that:
in the step B, judging whether the distance between the hardware fitting and the navigation camera is lower than a preset distance or not comprises judging whether the distance between the target hardware fitting and an estimated wheel of the line patrol robot is lower than the preset distance or not;
acquiring the estimated wheel distance between the target hardware fitting and the line patrol robot according to a formula V;
D_e = d − I
- - -formula five;
wherein:
D_e represents the estimated distance between the target hardware fitting and the wheels of the line patrol robot;
d represents the estimated actual distance from the target hardware to the navigation camera, namely the minimum error actual distance;
I represents the distance from the navigation camera to the wheels of the line patrol robot.
5. The visual recognition method of the inspection robot according to claim 4, characterized in that:
in the step C, the calculating an actual wheel distance of the line patrol robot of the target hardware includes:
acquiring the actual wheel distance between a target hardware fitting and the inspection robot according to a formula six;
D_a = n × P
wherein:
D_a represents the actual distance between the target hardware fitting and the wheels of the inspection robot;
P represents the pixel distance between the target hardware fitting and the wheels of the inspection robot;
n represents the scale factor (the ratio of actual distance to pixel distance).
6. The visual recognition method of the inspection robot according to claim 1, characterized in that:
in step B3, the step of determining whether hardware is present in the video frame includes the following steps:
step B31: inputting the video frame image processed in the steps B1 and B2 into a trained convolutional neural network;
step B32: acquiring the type and confidence of each target in the video frame image, and judging whether the confidence of the target hardware fitting reaches the threshold; if so, judging that the current target is the target hardware fitting; if not, acquiring the next video frame image and re-executing step B31.
7. A visual identification system of an inspection robot, characterized in that: the visual identification method of the inspection robot according to any one of claims 1 to 6 is applied, and the inspection robot is provided with a navigation camera, a monitoring camera, an image identification module and a motion control module;
the navigation camera and the monitoring camera are respectively used for providing real-time video frames for the image recognition module;
the image identification module is used for carrying out real-time target detection on real-time video frames, calculating the real-time distance between an obstacle and a wheel and generating an obstacle crossing strategy instruction;
and the motion control module is used for executing obstacle crossing action according to the obstacle crossing strategy instruction.
8. The vision recognition system of an inspection robot according to claim 7, characterized in that:
the navigation camera is arranged in front of the inspection robot;
the monitoring camera is installed on the top of the line patrol robot so that the walking wheels of the line patrol robot fall within the shooting range of the monitoring camera.
CN202111558006.7A 2021-12-20 2021-12-20 Visual identification method and system for inspection robot Active CN113946154B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111558006.7A CN113946154B (en) 2021-12-20 2021-12-20 Visual identification method and system for inspection robot


Publications (2)

Publication Number Publication Date
CN113946154A CN113946154A (en) 2022-01-18
CN113946154B true CN113946154B (en) 2022-04-22

Family

ID=79339203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111558006.7A Active CN113946154B (en) 2021-12-20 2021-12-20 Visual identification method and system for inspection robot

Country Status (1)

Country Link
CN (1) CN113946154B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112684806A (en) * 2019-10-18 2021-04-20 天津工业大学 Electric power inspection unmanned aerial vehicle system based on dual obstacle avoidance and intelligent identification

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018058356A1 (en) * 2016-09-28 2018-04-05 驭势科技(北京)有限公司 Method and system for vehicle anti-collision pre-warning based on binocular stereo vision
CN110687904B (en) * 2019-12-09 2020-08-04 广东科凯达智能机器人有限公司 Visual navigation routing inspection and obstacle avoidance method for inspection robot
CN111738189A (en) * 2020-06-29 2020-10-02 广东电网有限责任公司 Transmission line crimping hardware inspection control method, device, terminal and medium
CN112000094A (en) * 2020-07-20 2020-11-27 山东科技大学 Single-and-double-eye combined high-voltage transmission line hardware fitting online identification and positioning system and method
CN112621710A (en) * 2020-12-16 2021-04-09 国电南瑞科技股份有限公司 Obstacle detection control system and method for overhead transmission line inspection robot


Also Published As

Publication number Publication date
CN113946154A (en) 2022-01-18

Similar Documents

Publication Publication Date Title
CN108537154B (en) Power transmission line bird nest identification method based on HOG characteristics and machine learning
CN109159113B (en) Robot operation method based on visual reasoning
CN105469069A (en) Safety helmet video detection method for production line data acquisition terminal
CN106780483A (en) Many continuous casting billet end face visual identifying systems and centre coordinate acquiring method
CN114511519A (en) Train bottom bolt loss detection method based on image processing
CN112508911A (en) Rail joint touch net suspension support component crack detection system based on inspection robot and detection method thereof
CN112000094A (en) Single-and-double-eye combined high-voltage transmission line hardware fitting online identification and positioning system and method
CN115995056A (en) Automatic bridge disease identification method based on deep learning
CN115439643A (en) Road disease size analysis and management method based on monocular measurement
CN115018872A (en) Intelligent control method of dust collection equipment for municipal construction
CN111967323B (en) Electric power live working safety detection method based on deep learning algorithm
CN113946154B (en) Visual identification method and system for inspection robot
CN113469938A (en) Pipe gallery video analysis method and system based on embedded front-end processing server
CN116866520B (en) AI-based monorail crane safe operation real-time monitoring management system
CN113949142B (en) Inspection robot autonomous charging method and system based on visual recognition
CN114943904A (en) Operation monitoring method based on unmanned aerial vehicle inspection
CN113989209A (en) Power line foreign matter detection method based on fast R-CNN
CN114442658A (en) Automatic inspection system for unmanned aerial vehicle of power transmission and distribution line and operation method thereof
CN110516551B (en) Vision-based line patrol position deviation identification system and method and unmanned aerial vehicle
CN103208009A (en) Power transmission line vehicle-mounted inspection image classification method
CN112651276A (en) Power transmission channel early warning system based on double-light fusion and early warning method thereof
CN116912721B (en) Power distribution network equipment body identification method and system based on monocular stereoscopic vision
Lu et al. Design and implement of control system for power substation equipment inspection robot
CN111583176B (en) Image-based lightning protection simulation disc element fault detection method and system
CN113657144B (en) Rapid detection and tracking method for navigation ship in bridge area

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Visual Recognition Method and System for a Line Patrol Robot

Effective date of registration: 20231107

Granted publication date: 20220422

Pledgee: Shunde Guangdong rural commercial bank Limited by Share Ltd. Daliang branch

Pledgor: GUANGDONG KEYSTAR INTELLIGENCE ROBOT Co.,Ltd.

Registration number: Y2023980064495
