CN113420810A - Cable trench intelligent inspection system and method based on infrared and visible light - Google Patents

Cable trench intelligent inspection system and method based on infrared and visible light

Info

Publication number
CN113420810A
Authority
CN
China
Prior art keywords
image
cable
inspection
artificial intelligence
frame
Prior art date
Legal status
Granted
Application number
CN202110695138.8A
Other languages
Chinese (zh)
Other versions
CN113420810B (en)
Inventor
钟伦珑
黄荣辉
李弘宇
王程鹏
Current Assignee
China Energy Engineering Group Guangxi Electric Power Design Institute Co ltd
Civil Aviation University of China
Original Assignee
China Energy Engineering Group Guangxi Electric Power Design Institute Co ltd
Civil Aviation University of China
Priority date
Filing date
Publication date
Application filed by China Energy Engineering Group Guangxi Electric Power Design Institute Co ltd and Civil Aviation University of China
Priority to CN202110695138.8A
Publication of CN113420810A
Application granted
Publication of CN113420810B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/20 Administration of product repair or maintenance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S40/00 Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them
    • Y04S40/12 Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them characterised by data transport means between the monitoring, controlling or managing units and monitored, controlled or operated electrical equipment
    • Y04S40/126 Systems for electrical power generation, transmission, distribution or end-user application management characterised by the use of communication or information technologies, or communication or information technology specific aspects supporting them characterised by data transport means between the monitoring, controlling or managing units and monitored, controlled or operated electrical equipment using wireless data transmission

Abstract

The invention provides an intelligent cable trench inspection system and inspection method based on infrared and visible light. A camera module, a light supplement lamp, an SD card, an inertial navigation module and a WIFI module are each connected to an edge artificial intelligence processor. The camera module collects cable trench image data and sends it to the edge artificial intelligence processor, which stores it on the SD card; the light supplement lamp provides dynamic supplementary lighting; the inertial navigation module acquires acceleration and heading-angle data and sends them to the edge artificial intelligence processor; and the edge artificial intelligence processor communicates with an upper computer through the WIFI module. The method comprises the following steps: judging whether the current cable trench is being inspected for the first time; initializing a lightweight embedded local self-learning classification model; initializing parameters; major potential safety hazard screening inspection; potential safety hazard fixed-point fault detection; and generating an inspection log report. The invention does not require inspectors to supervise and observe the channel condition throughout the process and can complete the whole inspection process automatically.

Description

Cable trench intelligent inspection system and method based on infrared and visible light
Technical Field
The invention relates to the technical field of cable trench inspection, and in particular to an intelligent cable trench inspection system and inspection method based on infrared and visible light.
Background
Cable trenches are underground conduits used for laying electrical or telecommunication cables. A cable trench is a shallow covered channel, generally assembled from prefabricated modular unit structures. Because of environmental influences, dirt, standing water and heat easily accumulate inside the channel and threaten the operational safety of the cables. To ensure safe operation, the cable trench environment and the cable condition must be inspected regularly.
In traditional cable trench inspection, workers enter the narrow channel themselves; this is unsafe, laborious, physically demanding and slow, and real-time monitoring cannot be guaranteed. Patent CN202011381783.4 proposes an improved cable trench inspection robot and a control method thereof; the robot can conveniently enter a cable trench and return the state parameters of the trench and the cables in real time, but the whole inspection process requires manual remote-control supervision. Patent CN202010735272.1 proposes a cable trench fault detection method based on the fusion of infrared and visible light images, which can accurately find faults such as cable damage, discharge and overheating, but it does not inspect the cable trench environment and does not form a complete automatic inspection method.
Artificial intelligence technology is developing rapidly, and edge artificial intelligence processors have reached the computing power needed to run embedded artificial neural network models. Applying these technologies can raise the automation level of cable trench inspection and reduce the manual intervention required during inspection.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an intelligent cable trench inspection system and inspection method based on infrared and visible light that are capable of continuous hazard identification.
The technical scheme adopted by the invention is as follows: an intelligent cable trench inspection system based on infrared and visible light comprises an edge artificial intelligence processor and a camera module, a light supplement lamp, an SD card, an inertial navigation module and a WIFI module which are each connected to the edge artificial intelligence processor. The camera module is used for collecting cable trench image data and sending it to the edge artificial intelligence processor, which stores it on the SD card; the light supplement lamp is used for dynamic supplementary lighting; the inertial navigation module is used for acquiring acceleration and heading-angle data and sending them to the edge artificial intelligence processor; and the edge artificial intelligence processor communicates with an upper computer through the WIFI module.
An inspection method of the infrared and visible light based intelligent cable trench inspection system according to claim 1, comprising the following steps:
1) judging whether the cable trench currently to be inspected is being inspected for the first time; if so, entering step 2); if not, entering step 3);
2) initializing a lightweight embedded local self-learning classification model, comprising: acquiring cable trench image data and training the lightweight embedded local self-learning classification model with the acquired cable trench image data;
3) initializing parameters: setting the parameters of the current inspection task;
4) major potential safety hazard screening inspection: comprehensively evaluating the cable trench environment and the cables and checking whether a major potential safety hazard exists, comprising: cyclically performing cable trench plugging detection and cable temperature and cable skin damage detection until a preset inspection distance is reached; the cable trench plugging detection is based on images acquired by a visible light camera facing the geometric center of the current cable trench cross-section, and the cable temperature and cable skin damage detection is based on images acquired by an infrared camera and the visible light camera;
5) potential safety hazard fixed-point fault detection: potential safety hazards are confirmed by manual screening, and fixed-point fault detection is then performed by the inspection system;
6) generating an inspection log report: the edge artificial intelligence processor extracts and packages the potential safety hazard screening results and their handling generated during the whole inspection in the order of events, generates an inspection log report and stores it on the SD card.
Compared with the prior art, the intelligent cable trench inspection system and the intelligent cable trench inspection method based on infrared light and visible light have the following beneficial effects:
the invention relates to an infrared and visible light-based intelligent cable trench inspection system and an inspection method, wherein a lightweight embedded local self-learning classification model is deployed on an edge artificial intelligent processor, the obtained internal image of a cable trench is processed to be divided into major potential safety hazards and potential safety hazards, and an auxiliary judgment result is presented to an inspector in real time without the need of the inspector to supervise and observe the condition of the trench in the whole process; meanwhile, the invention can judge the routing inspection advancing situation and generate a routing inspection advancing path, and can automatically complete the whole routing inspection process when the cable trench is in a good state. The lightweight embedded local self-learning classification model applied by the invention has local self-adaptive capacity, and the classification model corresponding to the cable to be inspected can be obtained by quickly training by acquiring the information of the cable to be inspected, so that the reliability of the inspection result is improved, and the dependence on external hardware is reduced. The invention can be used for upgrading the existing cable trench inspection robot, can also be used for other temperature-sensitive and appearance integrity-sensitive inspection tasks, and has wide applicability.
Drawings
FIG. 1 is a block diagram of a cable trench intelligent inspection system based on infrared and visible light;
fig. 2 is a block diagram showing the construction of a camera module according to the present invention;
FIG. 3 is a block diagram showing the configuration of a camera selection switch according to the present invention;
FIG. 4 is a flow chart of the inspection method of the intelligent inspection system for cable ducts based on infrared light and visible light;
FIG. 5 is a flow chart of model initialization in the inspection method of the present invention;
fig. 6 is a flow chart of the cable trench plugging detection in step 4) of the inspection method of the present invention;
fig. 7 is a flow chart of the cable temperature and the cable skin breakage detection in step 4) of the inspection method of the present invention.
In the drawings
1: edge artificial intelligence processor; 2: camera module;
2.1: camera selection switch; 2.1.1: switch chip;
2.2: infrared camera; 2.3: visible light camera;
3: light supplement lamp; 4: SD card;
5: inertial navigation module; 6: WIFI module.
Detailed Description
The following describes in detail an infrared and visible light-based cable trench intelligent inspection system and inspection method according to the present invention with reference to the following embodiments and accompanying drawings.
As shown in fig. 1, the infrared and visible light-based intelligent cable trench inspection system comprises an edge artificial intelligence processor 1 and a camera module 2, a light supplement lamp 3, an SD card 4, an inertial navigation module 5 and a WIFI module 6 which are each connected to the edge artificial intelligence processor 1. The camera module 2 collects cable trench image data and sends it to the edge artificial intelligence processor 1, which stores it on the SD card 4; the light supplement lamp 3 provides dynamic supplementary lighting; the inertial navigation module 5 acquires acceleration and heading-angle data and sends them to the edge artificial intelligence processor 1; and the edge artificial intelligence processor 1 communicates with an upper computer through the WIFI module 6.
The edge artificial intelligence processor 1 is an embedded processor with at least 8 MB of SRAM, a floating-point unit (FPU) and a computing power of no less than 0.5 TOPS, on which the lightweight embedded local self-learning classification model is deployed. Default classes are prestored in the SRAM of the edge artificial intelligence processor 1; the default classes are reference cable identification clusters, comprising cable damage feature identification clusters, cable bracket identification clusters and fixed-pixel-size garbage identification clusters, but they do not include local classes, where a local class is an image-information cluster of the cable currently to be inspected.
As shown in fig. 2, the camera module 2 includes an infrared camera 2.2 and a visible light camera 2.3, and the infrared camera 2.2 and the visible light camera 2.3 are both connected to the edge artificial intelligence processor 1 through a camera selection switch 2.1.
As shown in fig. 3, the camera selection switch 2.1 includes 14 switch chips 2.1.1. The signal input end of each switch chip 2.1.1 is connected to the switch signal output end of the edge artificial intelligence processor 1. The image signal acquisition ends of each switch chip 2.1.1 are connected to the infrared camera 2.2 and the visible light camera 2.3 respectively and are used for turning on the infrared camera 2.2 or the visible light camera 2.3 to collect images according to the switch control signal received from the edge artificial intelligence processor 1 and for receiving the collected image data. The signal output end of each switch chip 2.1.1 is connected to a signal input end of the edge artificial intelligence processor 1 and is used for transmitting the received image data to the edge artificial intelligence processor 1.
In the embodiment of the invention:
WIFI module: ESP8266, ESP32 or ESP-01;
Edge artificial intelligence processor: K210, V831 or NVIDIA Jetson Nano;
Infrared camera: FLIR Lepton 3.5, T2L256 or MLX90641;
Visible light camera: GC0308, OV2640 or OV7725;
Inertial navigation module: MPU6050, MPU9250 or CE118G19;
Lightweight embedded local self-learning classification model: YOLOv3, R-CNN or SPP-Net;
Switch chip: BL1551.
As shown in fig. 4, the inspection method of the infrared and visible light-based intelligent cable trench inspection system of the present invention includes the following steps:
1) Determine whether this is the first inspection of the cable trench currently to be inspected; if so, go to step 2); if not, go to step 3);
2) Initialize the lightweight embedded local self-learning classification model: acquire cable trench image data and train the lightweight embedded local self-learning classification model with it; as shown in fig. 5, this specifically includes:
(2.1) The edge artificial intelligence processor 1, on which the lightweight embedded local self-learning classification model is deployed, sends a model update instruction to the upper computer through the WIFI module 6; after receiving the model update instruction, the upper computer assigns a label number to the newly created local class and sends it to the edge artificial intelligence processor 1;
(2.2) the edge artificial intelligence processor 1 stores the label number of the newly created local class on the SD card 4;
(2.3) the edge artificial intelligence processor 1 acquires cable trench image data through the camera module 2, stores it on the SD card 4, and trains the lightweight embedded local self-learning classification model with the acquired cable trench image data until the recognition rate of the model is greater than 95%;
(2.4) after the training of the lightweight embedded local self-learning classification model is finished, the edge artificial intelligence processor 1 automatically sends a stage-end instruction together with the label number of the lightweight embedded local self-learning classification model currently in use to the upper computer, confirming that the model currently in use is the trained lightweight embedded local self-learning classification model.
3) Initialize parameters: set the parameters of the current inspection task.
During parameter initialization, the edge artificial intelligence processor 1 receives a complete parameter packet from the upper computer through the WIFI module 6 and updates its parameters according to the received packet. The parameter packet contains: the inspection distance, the number of cable trench units, the inspection speed, the WIFI image-stream transmission rate, and the lens distortion correction coefficients. The inspection distance is the farthest distance the camera module 2 can reach, and a cable trench unit is the smallest unit into which the inspection area in the cable trench is divided.
4) Major potential safety hazard screening inspection: comprehensively evaluate the cable trench environment and the cables and check whether a major potential safety hazard exists, by cyclically performing cable trench plugging detection and cable temperature and cable skin damage detection until the preset inspection distance is reached. The cable trench plugging detection is based on images acquired by the visible light camera 2.3 facing the geometric center of the current cable trench cross-section, and the cable temperature and cable skin damage detection is based on images acquired by the infrared camera 2.2 and the visible light camera 2.3.
As shown in fig. 6, the cable trench plugging detection includes:
(4.1) image distortion removal
The image distortion removal addresses the barrel distortion that affects the visible light camera 2.3 when collecting images; the cable trench cross-section image collected by the visible light camera 2.3 is corrected with the following adjustment formula:
x = x0*(1 + k1*r^2 + k2*r^4 + k3*r^6), y = y0*(1 + k1*r^2 + k2*r^4 + k3*r^6)
where x0, y0 are the original coordinates of a distorted point in the image, x, y are the corrected coordinates, and r is the distance from the distorted point to the image center. The barrel distortion degree K varies with r, and its expression differs from camera to camera because of mechanical mounting, so a curve is fitted by the concentric circle method to determine the K-r relation in the form of an infinite series; k1, k2, k3, the coefficients of the first three terms of this series, are called the lens distortion correction coefficients and are set during parameter initialization;
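As a small illustration of the correction above, the following NumPy sketch applies the truncated radial series to a few pixel coordinates; the coefficient values, the image center and the test points are made-up examples rather than values from the patent:

```python
import numpy as np

def undistort_points(points, center, k1, k2, k3):
    """Apply the three-term radial (barrel) correction to distorted pixel coordinates."""
    p = np.asarray(points, dtype=float) - center        # coordinates relative to the image center
    r2 = np.sum(p ** 2, axis=1, keepdims=True)          # r^2 for every point
    K = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3     # truncated series K(r)
    return p * K + center                               # corrected positions (x, y)

# Made-up lens coefficients; in the system they arrive in the parameter packet of step 3).
pts = [(100.0, 80.0), (320.0, 240.0), (610.0, 400.0)]
print(undistort_points(pts, center=(320.0, 240.0), k1=-3.2e-7, k2=1.1e-13, k3=0.0))
```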
(4.2) image segmentation
The image segmentation executes a superpixel-based fast fuzzy clustering image segmentation algorithm twice on the undistorted image and fuzzily segments it into three image regions. Specifically, the image segmentation algorithm is executed a first time with one weighting exponent to segment the ground region from the non-ground region, the non-ground region comprising the cables, the cable rack, the left and right walls and the ceiling; the image segmentation algorithm is executed a second time with a changed weighting exponent on the ground region, segmenting a foreign-matter region and a road region for inspection travel;
(4.3) traveling situation judgment
The traveling situation judgment is performed on the segmented road region for inspection travel; there are three situations:
(a) blocked: the road region for inspection travel appears in the image as a short, wide trapezoid;
(b) straight ahead: the road region for inspection travel appears in the image as a tall, narrow trapezoid;
(c) turning left or right: the road region for inspection travel appears in the image as an irregular hexagon;
When situation (a) occurs, the edge artificial intelligence processor 1 immediately sends a help signal to the upper computer to request manual intervention and a judgment of whether the inspection path ahead is blocked; if it is blocked, the inspection ends, otherwise the inspection continues after the inspection path is re-determined manually;
When situation (b) or (c) occurs, inspection travel path planning is performed: the image of the road region for inspection travel is converted into a top view by a bird's-eye perspective transformation algorithm, a local path planning algorithm based on a rapidly-exploring random tree is applied to the top view, and the planned inspection travel path is generated on the top view;
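A minimal sketch of the bird's-eye (top-view) conversion that precedes the path planning, assuming the four corner points of the segmented road region are already available from step (4.2); the corner coordinates and output size below are illustrative, and the rapidly-exploring random tree planner itself is not shown:

```python
import cv2
import numpy as np

def road_to_top_view(image, road_corners, out_size=(200, 400)):
    """Warp the road region to a top view with a perspective (bird's-eye) transform.

    road_corners: four (x, y) points of the road region in the order
                  top-left, top-right, bottom-right, bottom-left.
    """
    w, h = out_size
    src = np.float32(road_corners)
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])   # map the trapezoid onto a rectangle
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, H, (w, h))

# Illustrative call on a synthetic frame; real corner points come from the segmentation step.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
top_view = road_to_top_view(frame, [(250, 200), (390, 200), (560, 470), (80, 470)])
print(top_view.shape)   # (400, 200, 3): the planner then works in this top view
```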
(4.4) foreign matter Classification
After the forward inspection travel path is determined, the foreign matter in the segmented foreign-matter region is classified. Specifically, the pixel information within the corresponding range of the foreign-matter region is extracted from the undistorted image and classified by the trained lightweight embedded local self-learning classification model, yielding the predicted class and score of the foreign matter; the foreign-matter class information and the foreign-matter path length are then marked on the inspection travel path, the marked foreign-matter path length being equal to the longitudinal pixel length of the foreign matter;
(4.5) recording the actual travel path
The actual inspection travel path is recorded as follows: starting from the planned inspection travel path in the top view, the planned path is corrected with the path length obtained by integrating the acceleration information from the inertial navigation module 5 and with the heading angle obtained from the inertial navigation module 5, giving the actual inspection travel path; this path is stored in a stack on the SD card 4 and serves as the address for looking up event marker data during cable trench detection;
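The correction with inertial data amounts to dead reckoning: the acceleration is integrated twice for distance and paired with the heading angle to give (x, y) waypoints. A simplified flat-ground sketch; the sampling period and the sample values are illustrative, not taken from the patent:

```python
import math

def dead_reckon(accel_samples, heading_samples_deg, dt):
    """Integrate along-track acceleration and combine it with heading angles
    to produce the (x, y) waypoints of the actual inspection travel path."""
    x = y = v = 0.0
    path = [(x, y)]
    for a, heading in zip(accel_samples, heading_samples_deg):
        v += a * dt                                   # first integration: speed
        step = v * dt                                 # second integration: distance this sample
        x += step * math.cos(math.radians(heading))
        y += step * math.sin(math.radians(heading))
        path.append((x, y))
    return path

# Illustrative data: accelerate briefly, cruise straight, then hold a slight turn.
accel = [0.2] * 10 + [0.0] * 40
heading = [0.0] * 30 + [5.0] * 20
path_stack = dead_reckon(accel, heading, dt=0.1)      # waypoints pushed onto the SD-card stack
print(path_stack[-1])
```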
(4.6) After the foreign matter detection is finished, the camera module 2 faces along the cable trench being inspected; the cables lie in the left and right parts of the field of view of the foreign-matter detection image, so the camera module 2 is rotated by a fixed angle towards the left or right region so that the camera field of view faces the cable;
as shown in fig. 7, the cable temperature and cable skin damage detection is to use a trained lightweight embedded local self-learning classification model to respectively process an RGB color image acquired by a visible light camera 2.3 and an infrared image acquired by an infrared camera 2.2, and specifically includes:
(4.7) visual image preprocessing
RGB image data collected by the visible light camera 2.3 facing the cable to be inspected is used as the data source, and RGB-to-gray conversion is performed with a 16-bit-precision psychological (luma) conversion formula to obtain the gray image of the original image:
Gray=(R*19595+G*38469+B*7472)>>16 (2)
where Gray is the converted gray value of each pixel and R, G, B are the RGB components of each pixel. To avoid floating-point operations, the weighting coefficients are multiplied by 2^16 and approximated as integers by a truncation (tail-removal) optimization, giving 19595, 38469 and 7472 in the formula; after Gray is computed with integer arithmetic, the last 16 bits of the mantissa are discarded by the >>16 shift.
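The three weights are the standard luma coefficients 0.299, 0.587 and 0.114 scaled by 2^16 (0.299 x 65536 ≈ 19595, 0.587 x 65536 ≈ 38469, 0.114 x 65536 ≈ 7472). A NumPy sketch of formula (2); the test pixel is a made-up example:

```python
import numpy as np

def rgb_to_gray_fixed_point(rgb):
    """Integer-only grayscale conversion of formula (2): luma weights scaled by 2^16,
    with the >>16 shift discarding the 16-bit fixed-point mantissa."""
    r = rgb[..., 0].astype(np.uint32)
    g = rgb[..., 1].astype(np.uint32)
    b = rgb[..., 2].astype(np.uint32)
    return ((r * 19595 + g * 38469 + b * 7472) >> 16).astype(np.uint8)

# Made-up pixel: the integer result matches the rounded-down floating-point luma.
pixel = np.array([[[200, 100, 50]]], dtype=np.uint8)
print(rgb_to_gray_fixed_point(pixel))   # [[124]] vs 0.299*200 + 0.587*100 + 0.114*50 = 124.2
```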
The average gray level of each frame is calculated by traversing the converted gray values of all pixels and is compared with the preset average gray level prestored in the SRAM of the edge artificial intelligence processor 1 to obtain the output voltage of the light supplement lamp 3, realizing dynamic supplementary lighting;
Meanwhile, image enhancement is performed on the RGB image data: the RGB image is converted into an HSI image, histogram equalization is applied to the I (intensity) channel to enhance image contrast, and the enhanced HSI image is converted back into an RGB image, eliminating the fog-like image interference caused by smoke and dust in the cable trench.
The enhanced RGB image is undistorted with the same method as the image distortion removal in step (4.1) of the cable trench plugging detection, giving the preprocessed image.
The preprocessed image is converted to gray with the method of formula (2), and the output gray image is temporarily stored in the internal SRAM for subsequent image recognition;
(4.8) Cable Damage identification
The preprocessed image is input into the trained lightweight embedded local self-learning classification model; sub-prediction frames with the same damage feature are extracted at three convolution levels with different strides, and the prediction score corresponding to each sub-prediction frame is output through a fully connected layer.
The sub-prediction frames and their corresponding prediction scores are combined by weighted summation to generate a damage prediction frame and its damage feature confidence value, which represents the probability that the damage judgment inside the damage prediction frame is correct.
The generated damage prediction frames are then processed further to delete redundant ones. Threshold filtering compares each damage feature confidence value with a set judgment threshold; if the confidence value is smaller than the threshold, the corresponding damage prediction frame is deleted, otherwise it is kept.
A class-wise non-maximum suppression method is then applied to the remaining damage prediction frames: for each damage feature, only the damage prediction frame with the largest damage feature confidence value is kept and the rest are deleted, generating the RGB image marked with damage prediction frames.
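A compact sketch of the two post-processing steps just described, confidence thresholding followed by per-class non-maximum suppression; the box format, the IoU criterion for "same damage feature" and the sample values are illustrative assumptions:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def filter_and_nms(boxes, scores, classes, score_thr=0.5, iou_thr=0.45):
    """Drop frames below the judgment threshold, then keep only the highest-confidence
    frame among overlapping frames of the same damage class."""
    order = [int(i) for i in np.argsort(scores)[::-1] if scores[i] >= score_thr]
    keep = []
    for i in order:
        if all(classes[i] != classes[j] or iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep

boxes = np.array([[10, 10, 60, 60], [12, 12, 58, 62], [100, 80, 150, 140]], dtype=float)
scores = np.array([0.92, 0.75, 0.40])
classes = np.array([0, 0, 0])
print(filter_and_nms(boxes, scores, classes))   # [0]: the overlapping and the low-score frames are removed
```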
(4.9) image data fusion
The infrared image data within the field of view is acquired by the infrared camera 2.2. The infrared image is denoised by Gaussian smoothing; the Canny edge detection algorithm is then applied both to the gray image stored in the SRAM during image preprocessing and to the denoised infrared image; a key-point matching method performs maximum-likelihood matching of the two edge images and aligns their edges, giving the mapping relation between the gray image and the infrared image, completing the edge matching, and determining the overlap region of the infrared image and the preprocessed RGB image.
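One concrete way to realize the registration step above is sketched below; the patent only specifies Canny edges plus a key-point matching step, so the use of ORB key points and a RANSAC homography here is an assumption made for illustration:

```python
import cv2
import numpy as np

def register_ir_to_gray(gray_vis, ir):
    """Estimate the mapping from the infrared image to the visible gray image
    by matching key points detected on their Canny edge maps."""
    ir_smooth = cv2.GaussianBlur(ir, (5, 5), 0)                 # Gaussian denoising of the IR frame
    edges_vis = cv2.Canny(gray_vis, 50, 150)
    edges_ir = cv2.Canny(ir_smooth, 50, 150)

    orb = cv2.ORB_create(500)
    kp_ir, des_ir = orb.detectAndCompute(edges_ir, None)
    kp_vis, des_vis = orb.detectAndCompute(edges_vis, None)
    if des_ir is None or des_vis is None:
        return None                                             # not enough edge structure to match

    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_ir, des_vis)
    if len(matches) < 4:
        return None
    src = np.float32([kp_ir[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_vis[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)        # infrared -> visible mapping
    return H
```

On a real image pair, H maps infrared pixel coordinates into the preprocessed RGB image and thus delimits the overlap region used in the following steps.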
The infrared image is processed to obtain an image representing the temperature change rate. Specifically, the Sobel operator is used to compute a first-order gradient at every pixel of the infrared image, converting the infrared image into an image of temperature change rate; each pixel carries the following two parameters:
G = sqrt(Gx^2 + Gy^2)
theta = arctan(Gy/Gx)
where Gx and Gy are the finite-difference approximations of the gradient of each pixel of the infrared image data in the horizontal and vertical directions, theta is the direction angle, and G is the gradient magnitude, i.e. the temperature change rate.
After the image representing the temperature change rate is obtained, its resolution is adjusted. Using the resolutions of the infrared image and of the RGB image in the overlap region as input parameters, bilinear interpolation produces a temperature change rate image with the same resolution as the RGB image.
Variable-temperature frame extraction is then performed on the resolution-adjusted temperature change rate image: pixels whose temperature change rate G is greater than a set change-rate threshold are connected to form the border of a region of abrupt temperature change, i.e. the temperature change frame;
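A sketch of the temperature change rate computation, the bilinear resolution adjustment and the extraction of the temperature change regions; the change-rate threshold, the input sizes and the synthetic hot spot are placeholders:

```python
import cv2
import numpy as np

def temperature_change_mask(ir, rgb_shape, rate_threshold=40.0):
    """Return a binary mask of abrupt temperature change at the RGB resolution."""
    gx = cv2.Sobel(ir, cv2.CV_32F, 1, 0, ksize=3)                 # horizontal difference Gx
    gy = cv2.Sobel(ir, cv2.CV_32F, 0, 1, ksize=3)                 # vertical difference Gy
    g = cv2.magnitude(gx, gy)                                     # G = sqrt(Gx^2 + Gy^2)

    h, w = rgb_shape[:2]
    g_resized = cv2.resize(g, (w, h), interpolation=cv2.INTER_LINEAR)   # bilinear interpolation
    return (g_resized > rate_threshold).astype(np.uint8)          # pixels inside temperature change regions

# Synthetic low-resolution thermal frame with one hot spot.
ir = np.zeros((120, 160), dtype=np.float32)
ir[50:70, 60:90] = 80.0
mask = temperature_change_mask(ir, rgb_shape=(480, 640, 3))
print(mask.shape, int(mask.sum()))
```

Bounding the connected components of this mask (for example with cv2.connectedComponentsWithStats) would give the temperature change frames used in the fusion step below.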
and carrying out data fusion on the temperature change rate image with the temperature change frame mark and the RGB image with the damage prediction frame mark generated in the step 4.8 to form a fused image. And re-dividing the feature extraction area based on the two frames of the damage prediction frame and the temperature change frame. According to the intersection relation between the prediction frame and the temperature-changing frame, dividing the fused image into the following four areas: the prediction method comprises the following steps of obtaining a region where a prediction frame and a temperature-changing frame intersect, obtaining a region contained in the temperature-changing frame in the region where the temperature-changing frame does not intersect with the prediction frame, obtaining a region contained in the prediction frame in the region where the temperature-changing frame does not intersect with the prediction frame, and obtaining a region not contained in the prediction frame and the temperature-changing frame.
The four kinds of region are assigned priorities:
• region where a prediction frame and a temperature change frame intersect: priority 0;
• region covered by a temperature change frame that does not intersect a prediction frame: priority 1;
• region covered by a prediction frame that does not intersect a temperature change frame: priority 2;
• region covered by neither a prediction frame nor a temperature change frame: priority 3;
where 0 is the highest priority and 3 is the lowest priority;
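The priority of a region follows directly from whether it lies inside a damage prediction frame, inside a temperature change frame, or both. A minimal sketch with axis-aligned boxes; the box format and helper names are illustrative:

```python
def boxes_overlap(a, b):
    """True if two (x1, y1, x2, y2) boxes intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def region_priority(in_prediction_frame, in_temperature_frame):
    """Map frame membership to the priority levels defined above (0 = highest)."""
    if in_prediction_frame and in_temperature_frame:
        return 0   # prediction frame and temperature change frame intersect
    if in_temperature_frame:
        return 1   # temperature change frame only
    if in_prediction_frame:
        return 2   # prediction frame only
    return 3       # covered by neither frame

# Example: one damage prediction frame tested against two temperature change frames.
damage_frame = (100, 100, 180, 160)
temperature_frames = [(150, 120, 220, 200), (400, 50, 450, 90)]
for t in temperature_frames:
    print(region_priority(boxes_overlap(damage_frame, t), True))   # prints 0, then 1
```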
(4.10) abnormal situation decision
The abnormal situation decision processes the four kinds of region generated in step (4.9) in priority order. The decision logic is as follows:
• priority 0: cable temperature abnormality and cable surface damage exist at the same time; high-risk alarm information and position information are sent immediately, all inspection work is suspended, and manual inspection is awaited;
• priority 1: the cable temperature is abnormal and the location is marked as a potential safety hazard point; a non-high-risk reminder is sent to the upper computer and the system waits to see whether the inspector replies to observe the image remotely; if the inspector does not reply within the set time, the event is recorded on the actual inspection travel path and marked as a class I event;
• priority 2: the cable skin is damaged; the event is recorded on the actual inspection travel path and marked as a class II event;
• priority 3: no processing is performed.
Class I events and class II events are processed further in the subsequent potential safety hazard fixed-point fault detection.
(4.11) image backup
After the abnormal situation decision is completed, the fused image carrying the temperature change frames and the prediction frames is backed up to the SD card 4; class I or class II events that were not observed manually in step (4.10) are marked in the fused image, and the fused image is sent to the upper computer through WIFI.
5) Potential safety hazard fixed-point fault detection: the potential safety hazards are confirmed by manual screening, and the inspection system then performs fixed-point fault detection. The system returns along the inspection travel path generated during the cable trench plugging detection; the edge artificial intelligence processor 1 reads the path information from the stack stored on the SD card 4, and if the retrieved path information contains an event marker that the inspector has determined needs to be checked, an interrupt is triggered, the path backtracking is paused, the camera module 2 is controlled to capture an image at that position, the cable temperature and cable skin damage detection of step 4) is executed, and the processed image is sent to the upper computer; after the inspector confirms, the return continues, and this fixed-point detection process is repeated until the starting point is reached.
Specifically, when the system judges from the information of the inertial navigation module 5 that the distance travelled has reached the preset inspection distance, it stops detecting forwards, sends a prompt to the upper computer and asks the inspector to confirm the start of the follow-up work. After the inspector agrees to the system request, the inspector manually marks all class I events and any class II events of interest according to the class I or class II event marking information sent by the system during the major potential safety hazard screening inspection, and instructs the system to perform fixed-point detection of these events. After receiving the fixed-point detection information, the system backtracks along the path towards the starting point; when an event requiring fixed-point detection is encountered, the backtracking is paused and the inspector is notified to observe; after the inspector confirms, the backtracking continues, and the fixed-point detection process is repeated until the starting point is reached.
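The return trip of step 5) can be pictured as popping the recorded waypoints off the path stack and pausing whenever a waypoint carries an event marker selected by the inspector. The data layout and the callback names in this sketch are assumptions:

```python
def backtrack_and_inspect(path_stack, selected_events, inspect_at, confirm):
    """Return along the recorded path; pause at waypoints carrying a selected event
    marker, run fixed-point detection there, then continue back to the starting point.

    path_stack      : list of (x, y, event_marker) waypoints pushed during step 4)
    selected_events : set of event markers the inspector asked to re-check
    inspect_at      : callback that captures and processes an image at a waypoint
    confirm         : callback that blocks until the inspector confirms
    """
    while path_stack:
        x, y, event = path_stack.pop()        # most recently recorded waypoint first
        if event in selected_events:
            inspect_at(x, y)                  # re-run cable temperature and skin damage detection
            confirm()                         # wait for the inspector before moving on
    # stack empty: the starting point has been reached

# Illustrative run with stub callbacks.
stack = [(0, 0, None), (3, 0, "II-1"), (6, 1, None), (9, 1, "I-1")]
backtrack_and_inspect(stack, {"I-1"},
                      inspect_at=lambda x, y: print("fixed-point detection at", x, y),
                      confirm=lambda: print("inspector confirmed"))
```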
6) Generating the inspection log report: the edge artificial intelligence processor 1 extracts and packages the potential safety hazard screening results and their handling generated during the whole inspection in the order of events, generates an inspection log report and stores it on the SD card 4.

Claims (10)

1. An intelligent cable trench inspection system based on infrared and visible light, characterized by comprising an edge artificial intelligence processor (1) and a camera module (2), a light supplement lamp (3), an SD card (4), an inertial navigation module (5) and a WIFI module (6) which are each connected to the edge artificial intelligence processor (1), wherein the camera module (2) is used for collecting cable trench image data and sending it to the edge artificial intelligence processor (1), which stores it on the SD card (4); the light supplement lamp (3) is used for dynamic supplementary lighting; the inertial navigation module (5) is used for acquiring acceleration and heading-angle data and sending them to the edge artificial intelligence processor (1); and the edge artificial intelligence processor (1) communicates with an upper computer through the WIFI module (6).
2. The infrared and visible light-based intelligent cable trench inspection system according to claim 1, wherein the camera module (2) comprises an infrared camera (2.2) and a visible light camera (2.3), and the infrared camera (2.2) and the visible light camera (2.3) are connected with the edge artificial intelligence processor (1) through a camera selection switch (2.1).
3. The intelligent cable trench inspection system based on infrared and visible light according to claim 1, characterized in that the camera selection switch (2.1) comprises 14 switch chips (2.1.1); the signal input end of each switch chip (2.1.1) is connected to the switch signal output end of the edge artificial intelligence processor (1); the image signal acquisition ends of each switch chip (2.1.1) are connected to the infrared camera (2.2) and the visible light camera (2.3) respectively and are used for turning on the infrared camera (2.2) or the visible light camera (2.3) to collect images according to the received switch control signal of the edge artificial intelligence processor (1) and for receiving the collected image data; and the signal output end of each switch chip (2.1.1) is connected to a signal input end of the edge artificial intelligence processor (1) and is used for transmitting the received image data to the edge artificial intelligence processor (1).
4. The inspection method of the infrared and visible light based intelligent cable trench inspection system according to claim 1, comprising the following steps:
1) judging whether the cable trench currently to be inspected is being inspected for the first time; if so, entering step 2); if not, entering step 3);
2) initializing a lightweight embedded local self-learning classification model, comprising: acquiring cable trench image data and training the lightweight embedded local self-learning classification model with the acquired cable trench image data;
3) initializing parameters: setting the parameters of the current inspection task;
4) major potential safety hazard screening inspection: comprehensively evaluating the cable trench environment and the cables and checking whether a major potential safety hazard exists, comprising: cyclically performing cable trench plugging detection and cable temperature and cable skin damage detection until a preset inspection distance is reached; the cable trench plugging detection is based on images acquired by a visible light camera (2.3) facing the geometric center of the current cable trench cross-section, and the cable temperature and cable skin damage detection is based on images acquired by an infrared camera (2.2) and the visible light camera (2.3);
5) potential safety hazard fixed-point fault detection: potential safety hazards are confirmed by manual screening, and fixed-point fault detection is then performed by the inspection system;
6) generating an inspection log report: the edge artificial intelligence processor (1) extracts and packages the potential safety hazard screening results and their handling generated during the whole inspection in the order of events, generates an inspection log report and stores it on the SD card (4).
5. The inspection method according to claim 4, wherein the step 2) specifically includes:
(2.1) the edge artificial intelligence processor (1), on which the lightweight embedded local self-learning classification model is deployed, sends a model update instruction to the upper computer through the WIFI module (6); after receiving the model update instruction, the upper computer assigns a label number to the newly created local class and sends it to the edge artificial intelligence processor (1);
(2.2) the edge artificial intelligence processor (1) stores the label number of the newly created local class on the SD card (4);
(2.3) the edge artificial intelligence processor (1) acquires cable trench image data through the camera module (2), stores it on the SD card (4), and trains the lightweight embedded local self-learning classification model with the acquired cable trench image data until the recognition rate of the model is greater than 95%;
(2.4) after the training of the lightweight embedded local self-learning classification model is finished, the edge artificial intelligence processor (1) automatically sends a stage-end instruction together with the label number of the lightweight embedded local self-learning classification model currently in use to the upper computer, confirming that the model currently in use is the trained lightweight embedded local self-learning classification model.
6. The inspection method according to claim 4, wherein the parameter initialization in step 3) is that the edge artificial intelligence processor (1) receives a complete parameter packet from the upper computer through the WIFI module (6) and updates its parameters according to the received parameter packet; the parameter packet contains: the inspection distance, the number of cable trench units, the inspection speed, the WIFI image-stream transmission rate, and the lens distortion correction coefficients; the inspection distance is the farthest inspection distance the camera module (2) can reach, and a cable trench unit is the smallest unit into which the inspection area in the cable trench is divided.
7. The inspection method according to claim 4, wherein the cable trench plugging detection of the step 4) includes:
(4.1) image distortion removal
The image distortion removal uses the following adjustment formula to remove the distortion of the cable trench cross-section image collected by the visible light camera (2.3):
x = x0*(1 + k1*r^2 + k2*r^4 + k3*r^6), y = y0*(1 + k1*r^2 + k2*r^4 + k3*r^6)
where x0, y0 are the original coordinates of a distorted point in the image, x, y are the corrected coordinates, r is the distance from the distorted point to the image center, and k1, k2, k3, the coefficients of the first three terms of the infinite series, are called the lens distortion correction coefficients and are set during parameter initialization;
(4.2) image segmentation
The image segmentation is to execute a superpixel-based fast fuzzy clustering image segmentation algorithm twice on the undistorted image and fuzzily segment it into three image regions; specifically, the image segmentation algorithm is executed a first time with one weighting exponent to segment the ground region from the non-ground region, the non-ground region comprising the cables, the cable rack, the left and right walls and the ceiling; the image segmentation algorithm is executed a second time with a changed weighting exponent on the ground region, segmenting a foreign-matter region and a road region for inspection travel;
(4.3) traveling situation judgment
The traveling situation judgment is performed on the segmented road region for inspection travel; there are three situations:
(a) blocked: the road region for inspection travel appears in the image as a short, wide trapezoid;
(b) straight ahead: the road region for inspection travel appears in the image as a tall, narrow trapezoid;
(c) turning left or right: the road region for inspection travel appears in the image as an irregular hexagon;
when situation (a) occurs, the edge artificial intelligence processor (1) immediately sends a help signal to the upper computer to request manual intervention and a judgment of whether the inspection path ahead is blocked; if it is blocked, the inspection ends, otherwise the inspection continues after the inspection path is re-determined manually;
when situation (b) or (c) occurs, inspection travel path planning is performed: the image of the road region for inspection travel is converted into a top view by a bird's-eye perspective transformation algorithm, a local path planning algorithm based on a rapidly-exploring random tree is applied to the top view, and the planned inspection travel path is generated on the top view;
(4.4) foreign matter Classification
After the advancing routing inspection advancing path is determined, classifying the foreign matters in the divided foreign matter areas; specifically, pixel information in a corresponding range of a foreign matter region is extracted from a distortion-removed image, the pixel information is subjected to prediction classification through a trained lightweight embedded local self-learning classification model to obtain prediction and grading information of foreign matter classification, the foreign matter classification information and the foreign matter path length are marked in an inspection travelling path, and the length of the marked foreign matter path is the same as the longitudinal pixel length of the foreign matter;
(4.5) recording the actual travel path
The actual patrol travelling path is recorded, on the basis of a planned patrol travelling path in a plan view, the planned patrol travelling path is corrected by integrating acceleration information acquired from the inertial navigation module (5) to obtain path length information and a course angle acquired from the inertial navigation module (5), so that the actual patrol travelling path is obtained and stored in an SD card (4) in a stacking mode to be used as an address for searching event marker data in cable trench detection;
(4.6) after the foreign matter detection is finished, the camera module (2) faces along the cable trench being inspected; the cables lie in the left and right parts of the field of view of the foreign-matter detection image, so the camera module (2) is rotated by a fixed angle towards the left or right region so that the camera field of view faces the cable.
8. The inspection method according to claim 4, wherein the cable temperature and cable skin damage detection in the step 4) respectively processes the RGB color image acquired by the visible light camera (2.3) and the infrared image acquired by the infrared camera (2.2) by using a trained lightweight embedded local self-learning classification model, and specifically comprises:
(4.7) visual image preprocessing
RGB image data collected by a visible light camera (2.3) facing to a cable to be inspected are used as a data source, and RGB conversion gray level processing is carried out through a psychology conversion formula with 16-bit precision to obtain a gray level image of an original image:
Gray=(R*19595+G*38469+B*7472)>>16 (2)
wherein Gray is the converted gray value of each pixel and R, G, B are the RGB components of each pixel; to avoid floating-point operations, the weighting coefficients are multiplied by 2^16 and approximated as integers by a truncation (tail-removal) optimization;
calculating the average gray level of each frame of image by traversing the conversion gray levels of all pixel points, and comparing the average gray level with the set average gray level prestored in an SRAM (static random access memory) in the edge artificial intelligence processor (1) to obtain the output voltage of the light supplementing lamp (3), thereby realizing dynamic light supplementing;
simultaneously, image enhancement is carried out on RGB image data, specifically, the RGB image is converted into an HSI image, histogram equalization is carried out on an I channel, image contrast is enhanced, the enhanced HSI image is converted into the RGB image, and fog-shaped image interference caused by smoke dust in a cable trench is eliminated;
and (3) carrying out distortion removal on the enhanced RGB image by adopting the method which is the same as the image distortion removal in the (4.1) step in the cable trench plugging detection to obtain a preprocessed image.
Performing RGB (red, green and blue) conversion gray level processing on the preprocessed image by adopting a method of formula (2), outputting a gray level image, and temporarily storing the gray level image in an internal memory SRAM (static random access memory) for subsequent image identification;
(4.8) Cable Damage identification
Inputting the preprocessed image into a trained lightweight embedded local self-learning classification model, extracting sub-prediction frames with the same damage characteristic at three convolution levels with different steps, and outputting prediction scoring values corresponding to the sub-prediction frames through a full connection layer;
carrying out weighted summation on the sub-prediction frames and the corresponding prediction score values to generate a damage prediction frame and a damage characteristic confidence value corresponding to the damage prediction frame, wherein the damage characteristic confidence value is represented by the probability that damage judgment in the damage prediction frame is correct;
performing threshold filtering on the generated damage prediction frame, wherein the threshold filtering is to compare the damage characteristic confidence value with a set judgment threshold, if the damage characteristic confidence value is smaller than the threshold, deleting the corresponding damage prediction frame, and otherwise, keeping the corresponding damage prediction frame;
secondly, applying a plurality of classified non-maximum value inhibition methods to the reserved damage prediction frames, only reserving one damage prediction frame with the maximum damage characteristic confidence value for the same damage characteristic, and deleting the rest damage prediction frames to generate an RGB image with damage prediction frame marks;
(4.9) image data fusion
Acquiring infrared image data in a field angle range through an infrared camera (2.2), removing noise of the infrared image through Gaussian smoothing, respectively carrying out edge detection on a gray level image stored in an SRAM in image preprocessing and the de-noised infrared image by using a Canny edge detection algorithm, carrying out image maximum likelihood matching by using a key point matching method, aligning the edges of the two images to obtain a mapping relation between the gray level image and the infrared image, completing edge matching, and determining a superposition area of the infrared image and the preprocessed RGB image;
processing the infrared image to obtain an image representing the temperature change rate, specifically, solving a first-order gradient for each pixel point of the infrared image by using a Sobel operator, converting the infrared image into the image representing the temperature change rate, wherein each pixel point comprises the following two parameters:
G = sqrt(Gx^2 + Gy^2)
theta = arctan(Gy/Gx)
where Gx and Gy are the finite-difference approximations of the gradient of each pixel of the infrared image data in the horizontal and vertical directions, theta is the direction angle, and G is the gradient magnitude, i.e. the temperature change rate;
extracting a variable temperature frame of the image which is subjected to resolution adjustment and represents the temperature change rate, and connecting pixel points of the image, of which the temperature change rate G is greater than a set change rate threshold value, to form a frame of a temperature mutation area, namely the variable temperature frame;
fusing the temperature-change-rate image carrying the temperature-change frame marks with the RGB image carrying the damage prediction frame marks generated in step (4.8) to form a fused image; re-dividing the feature extraction area based on the two kinds of frames, namely the damage prediction frame and the temperature-change frame; and, according to the intersection relation between the prediction frame and the temperature-change frame, dividing the fused image into the following four areas: the area where the prediction frame and the temperature-change frame intersect; the area contained only in the temperature-change frame; the area contained only in the prediction frame; and the area contained in neither the prediction frame nor the temperature-change frame.
And assigning priorities to the four areas:
the area where the prediction frame and the temperature-change frame intersect: priority 0;
the area contained only in the temperature-change frame: priority 1;
the area contained only in the prediction frame: priority 2;
the area contained in neither the prediction frame nor the temperature-change frame: priority 3;
wherein priority 0 is the highest and priority 3 is the lowest;
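A minimal sketch of this four-region division, assuming the damage prediction frame and the temperature-change frame are available as binary masks over the fused image; the mask representation itself is an assumption, not something the claim specifies.

```python
import numpy as np

def assign_region_priority(pred_mask: np.ndarray, temp_mask: np.ndarray) -> np.ndarray:
    """Map every pixel of the fused image to one of the four priorities:
    0 - inside both the damage prediction frame and the temperature-change frame,
    1 - inside the temperature-change frame only,
    2 - inside the damage prediction frame only,
    3 - inside neither frame (lowest priority)."""
    pred = pred_mask.astype(bool)
    temp = temp_mask.astype(bool)
    priority = np.full(pred.shape, 3, dtype=np.uint8)   # default: neither frame
    priority[pred & ~temp] = 2                          # prediction frame only
    priority[temp & ~pred] = 1                          # temperature-change frame only
    priority[pred & temp] = 0                           # intersection, highest priority
    return priority
```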
(4.10) abnormal situation decision
Making the abnormal-situation decision for the four areas generated in step (4.9) in order of priority;
(4.11) image backup
After the abnormal-situation decision is completed, the fused image with the temperature-change frame and the prediction frame is backed up to the SD card (4), any class I event or class II event that was not observed manually in step (4.10) is marked in the fused image, and the fused image is sent to the upper computer through WIFI.
9. The inspection method according to claim 8, wherein in step (4.10), the damage logic for the areas of different priorities is processed as follows:
priority 0: if a cable temperature abnormality and cable surface damage exist at the same time, high-risk alarm information and position information are sent immediately, all inspection work is suspended, and manual inspection is awaited;
priority 1: if the cable temperature is abnormal, the location is marked as a potential safety hazard point, non-high-risk reminder information is sent to the upper computer, and the system waits for the inspection staff to reply for remote image observation; if the inspection staff does not reply within a set time, the event is recorded on the actual inspection travel path and marked as a class I event;
priority 2: if the cable skin is damaged, the event is recorded on the actual inspection travel path and marked as a class II event;
priority 3: no processing is performed.
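A hedged sketch of this priority-based decision logic; send_alarm, notify_upper_computer and record_event are hypothetical placeholder callbacks standing in for the system's WIFI messaging and path-recording facilities, not interfaces defined by the patent.

```python
def handle_region(priority: int, send_alarm, notify_upper_computer, record_event):
    """Dispatch the abnormal-situation decision for one region by priority.

    All three callbacks are placeholders for the edge processor's alarm,
    upper-computer messaging and travel-path event-recording facilities."""
    if priority == 0:
        # temperature abnormality and skin damage coexist: send high-risk alarm
        # with position, suspend all inspection work, wait for manual inspection
        send_alarm(level="high_risk", include_position=True)
    elif priority == 1:
        # temperature abnormality only: non-high-risk reminder; if the staff
        # does not reply within the set time, record a class I event on the path
        replied = notify_upper_computer(level="non_high_risk", wait_for_reply=True)
        if not replied:
            record_event(kind="class_I")
    elif priority == 2:
        # cable skin damage only: record a class II event on the travel path
        record_event(kind="class_II")
    else:
        # priority 3: no processing is performed
        pass
```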
10. The inspection method according to claim 4, wherein step 5) is returning according to the inspection travel path information generated during the cable trench plugging detection: the edge artificial intelligence processor (1) reads the path information from the stack stored in the SD card (4); if the extracted path information contains an event mark that requires checking by the inspection staff, an interruption is triggered, the path backtracking is suspended, the camera module (2) is controlled to shoot an image at that position, step 4) is executed to detect the cable temperature and cable skin damage, and the processed image is sent to the upper computer; after confirmation by the inspection staff, the return action continues, and the fixed-point detection process is repeated until the starting point is reached.
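A hedged sketch of this return phase. Every name below (path_stack, has_event_mark, capture_image, detect_temperature_and_damage, send_to_upper_computer, wait_for_confirmation) is a hypothetical placeholder for the SD-card stack access, camera control, step-4 detection and WIFI reporting of the system.

```python
def backtrack_and_recheck(path_stack, has_event_mark, capture_image,
                          detect_temperature_and_damage, send_to_upper_computer,
                          wait_for_confirmation):
    """Return along the stored inspection path, re-checking every marked position.

    path_stack is a list used as a LIFO stack of waypoints; the other
    arguments are placeholder callables for the edge AI processor's hardware
    and communication interfaces."""
    while path_stack:
        waypoint = path_stack.pop()                 # retrace the travel path in reverse
        if has_event_mark(waypoint):
            # interruption: suspend backtracking and repeat fixed-point detection
            image = capture_image(waypoint)
            result = detect_temperature_and_damage(image)   # step 4) re-run
            send_to_upper_computer(result)
            wait_for_confirmation(waypoint)         # resume only after staff confirms
        # continue the return action toward the starting point
```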
CN202110695138.8A 2021-06-22 2021-06-22 Cable trench intelligent inspection system and method based on infrared and visible light Active CN113420810B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110695138.8A CN113420810B (en) 2021-06-22 2021-06-22 Cable trench intelligent inspection system and method based on infrared and visible light

Publications (2)

Publication Number Publication Date
CN113420810A true CN113420810A (en) 2021-09-21
CN113420810B CN113420810B (en) 2022-08-26

Family

ID=77716179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110695138.8A Active CN113420810B (en) 2021-06-22 2021-06-22 Cable trench intelligent inspection system and method based on infrared and visible light

Country Status (1)

Country Link
CN (1) CN113420810B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400392A (en) * 2013-08-19 2013-11-20 山东鲁能智能技术有限公司 Binocular vision navigation system and method based on inspection robot in transformer substation
CN108037133A (en) * 2017-12-27 2018-05-15 武汉市智勤创亿信息技术股份有限公司 A kind of power equipments defect intelligent identification Method and its system based on unmanned plane inspection image
CN111721279A (en) * 2019-03-21 2020-09-29 国网陕西省电力公司商洛供电公司 Tail end path navigation method suitable for power transmission inspection work
CN110246130A (en) * 2019-06-21 2019-09-17 中国民航大学 Based on infrared and visible images data fusion airfield pavement crack detection method
CN110340902A (en) * 2019-07-03 2019-10-18 国网安徽省电力有限公司电力科学研究院 A kind of cable duct crusing robot, system and method for inspecting
CN110850723A (en) * 2019-12-02 2020-02-28 西安科技大学 Fault diagnosis and positioning method based on transformer substation inspection robot system
CN111563457A (en) * 2019-12-31 2020-08-21 成都理工大学 Road scene segmentation method for unmanned automobile
CN111400425A (en) * 2020-03-18 2020-07-10 北京嘀嘀无限科技发展有限公司 Method and system for automatically optimizing and selecting path
CN112540607A (en) * 2020-04-03 2021-03-23 深圳优地科技有限公司 Path planning method and device, electronic equipment and storage medium
CN112001260A (en) * 2020-07-28 2020-11-27 国网湖南省电力有限公司 Cable trench fault detection method based on infrared and visible light image fusion
CN112683911A (en) * 2020-11-17 2021-04-20 国网山东省电力公司济南供电公司 Cable tunnel intelligence unmanned aerial vehicle inspection check out test set with high stability

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Lü Peiqiang et al.: "Application of adaptive mobile patrol recognition in intelligent inspection of distribution station rooms", Electric Power and Energy *
Huang Ronghui et al.: "Development of a K210-based cable trench visual inspection robotic snake", Industrial Control Computer *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113095321A (en) * 2021-04-22 2021-07-09 武汉菲舍控制技术有限公司 Roller bearing temperature measurement and fault early warning method and device for belt conveyor
CN113095321B (en) * 2021-04-22 2023-07-11 武汉菲舍控制技术有限公司 Roller bearing temperature measurement and fault early warning method and device for belt conveyor
CN114264661A (en) * 2021-12-06 2022-04-01 浙江大学台州研究院 Definition self-adaptive coiled material detection method, device and system

Also Published As

Publication number Publication date
CN113420810B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
KR102008973B1 (en) Apparatus and Method for Detection defect of sewer pipe based on Deep Learning
CN108230344B (en) Automatic identification method for tunnel water leakage diseases
CN113420810B (en) Cable trench intelligent inspection system and method based on infrared and visible light
CN107154040A (en) A kind of tunnel-liner surface image crack detection method
CN105373135A (en) Method and system for guiding airplane docking and identifying airplane type based on machine vision
Phung et al. Automatic crack detection in built infrastructure using unmanned aerial vehicles
CN113379712B (en) Steel bridge bolt disease detection method and system based on computer vision
CN113808098A (en) Road disease identification method and device, electronic equipment and readable storage medium
CN113436157A (en) Vehicle-mounted image identification method for pantograph fault
CN108961276B (en) Distribution line inspection data automatic acquisition method and system based on visual servo
CN106447699A (en) High-speed rail overhead contact line equipment object detection and tracking method based on Kalman filtering
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN113313107A (en) Intelligent detection and identification method for multiple types of diseases on cable surface of cable-stayed bridge
CN112115770A (en) Method and system for identifying autonomous inspection defects of unmanned aerial vehicle of overhead line
CN115578326A (en) Road disease identification method, system, equipment and storage medium
CN114882410A (en) Tunnel ceiling lamp fault detection method and system based on improved positioning loss function
CN112686120B (en) Power transmission line abnormity detection method based on unmanned aerial vehicle aerial image
CN116912805B (en) Well lid abnormity intelligent detection and identification method and system based on unmanned sweeping vehicle
CN112508911A (en) Rail joint touch net suspension support component crack detection system based on inspection robot and detection method thereof
CN116958837A (en) Municipal facilities fault detection system based on unmanned aerial vehicle
CN116740833A (en) Line inspection and card punching method based on unmanned aerial vehicle
JP2020147129A (en) Overhead power line metal fitting detection device and overhead power line metal fitting detection method
CN114663672A (en) Method and system for detecting corrosion of steel member of power transmission line tower
KR20220075999A (en) Pothole detection device and method based on deep learning
CN111767815A (en) Tunnel water leakage identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant