CN114299308A - Intelligent detection system and method for damaged road guardrail - Google Patents

Intelligent detection system and method for damaged road guardrail

Info

Publication number
CN114299308A
CN114299308A (application CN202111559615.4A)
Authority
CN
China
Prior art keywords
guardrail
control chip
fpga control
picture
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111559615.4A
Other languages
Chinese (zh)
Inventor
胡增
江大白
杨坤龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Applied Technology Co Ltd
Original Assignee
China Applied Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Applied Technology Co Ltd filed Critical China Applied Technology Co Ltd
Priority to CN202111559615.4A priority Critical patent/CN114299308A/en
Publication of CN114299308A publication Critical patent/CN114299308A/en
Pending legal-status Critical Current

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention discloses an intelligent detection system and method for a damaged road guardrail. The system is built around an FPGA control chip: a picture of the guardrail to be detected is input into the FPGA control chip through a camera module, and a static RGB data set and an optical flow data set are calculated from it; displacement changes are converted into a voltage output, which is sent to the FPGA control chip after signal conditioning; an encoder outputs the deformation rotation angle of the guardrail in pulse form, and the pulses are transmitted to the FPGA control chip after the pulse signal conditioning circuit. The FPGA control chip processes the acquired data, judges whether the displacement signal of the picture exceeds a threshold value, and displays the detection result on a liquid crystal display; the acquired guardrail pictures are also sent to the FPGA control chip through a ZigBee module for processing. The method has a high degree of automation, saves the cost of manual inspection, can detect damaged guardrail areas regardless of their size, and meets the requirement of rapid on-site qualification testing of guardrails.

Description

Intelligent detection system and method for damaged road guardrail
Technical Field
The invention relates to the field of road guardrail detection, in particular to an intelligent detection system and method for a damaged road guardrail.
Background
Traffic guardrails on urban roads are a necessary measure in traffic construction and provide a safety guarantee for people travelling, yet existing guardrails are monotonous in structure, single in function and not intelligent. As an isolation and anti-collision facility, the traffic guardrail is widely used in traffic engineering such as expressways and urban roads; it is one of the traffic safety facilities and plays a very important role in reducing the damage caused by driving accidents. The guardrail net is an important piece of road traffic infrastructure and a supporting project of the road system; once guardrail nets erected on both sides of a public road are stolen, lost or deliberately damaged, they seriously threaten vehicle operation and the life and property safety of residents along the line.
Traditional solutions all rely on manual inspection along the line, which is labour- and time-consuming and cannot discover damage in time. At present, existing research only identifies whether the thicker cement railings along a highway are intact; it suffers from poor adaptability and high missed-detection and false-detection rates, and it cannot effectively identify whether the protective fence nets, which occupy the largest proportion of the roadside, are intact, for example correctly recognising a guardrail net whose damaged area reaches 200 cm². In addition, these research projects have not established a unified guardrail image acquisition and processing system and method, and cannot process multiple signal channels simultaneously.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides an intelligent detection system and method for damaged road guardrails.
The technical scheme adopted by the invention is as follows: the system is an image acquisition and processing system comprising a camera module, an analog signal conditioning circuit, a pulse signal conditioning circuit, an A/D conversion module, a ZigBee module and a display module;
the system utilizes the FPGA control chip, the picture of the guardrail to be detected is input into the FPGA control chip through the camera module, a static RGB data set and a light stream data set are respectively calculated, the displacement change is converted into the voltage change output, the voltage change output is sent into the FPGA control chip after passing through the pulse signal conditioning circuit, the encoder outputs the guardrail deformation rotating angle in a pulse form, the FPGA control chip is sent into the FPGA control chip after passing through the conditioning circuit, the FPGA control chip processes the acquired data, judges whether the displacement signal of the picture exceeds a threshold value or not, and displays the detection result on the liquid crystal display, the acquired guardrail picture is sent to the FPGA control chip through the ZigBee module and is processed.
Furthermore, the picture of the guardrail to be detected is input into the FPGA control chip through the camera module; the analog signal is converted into a digital quantity by the A/D conversion module and transmitted into the FPGA control chip via the analog signal conditioning circuit; the pulse signal is sent to the FPGA control chip after the pulse signal conditioning circuit. The FPGA control chip processes the acquired data, displays it on the OLCD display module and forwards it to the ZigBee module.
Furthermore, the analog signal conditioning circuit uses two-stage amplification: first amplification, then filtering, then amplification. A differential amplifier circuit serves as the pre-amplifier stage; the output signal of the pressure sensor is amplified 20 times and passed through a low-pass filter to remove high-frequency interference, the amplified signal then enters the second amplifier stage and is amplified another 20 times, and the final signal is output in the 0-5 V range, eliminating common-mode interference.
Furthermore, the encoder is a 360-degree rotary encoder with a rated voltage of DC 5 V that outputs TTL pulse signals; the signals are shaped by a 74LS138 chip and then transmitted to the FPGA control chip. The OLCD display module uses a 1.3-inch IIC/SPI serial-port liquid crystal display. The camera module uses an HF899 USB industrial high-definition camera shooting 1080P at a 135-degree wide angle.
Furthermore, in the FPGA control chip software design, the picture and pressure parameters are input and the analog and pulse signals are acquired and processed; the analog signals are converted by the A/D converter associated with the FPGA EP4CE6F17C8N to obtain the corresponding digital quantities, and the pulse signals are counted by triggering an external interrupt of the FPGA control chip. After calculation, processing and qualification judgment, the data are displayed on the OLCD and transmitted to the ZigBee module through a serial port.
The intelligent detection method for the damaged road guardrail has a static input and a static output: the input is a group of continuous guardrail pictures, and the output has two classes, guardrail damaged and guardrail not damaged. The algorithm comprises the following steps (a data-preparation sketch for step S1 is given after the list):
step S1: inputting a guardrail picture, and respectively making a static RGB data set and an optical flow data set of the guardrail picture;
step S2: convolving the static RGB data set by using a spatial feature enhancement network to separate spatial dimension features of the guardrail picture;
step S3: convolving the optical flow data set, and extracting optical flow information from adjacent pictures as input so as to separate the time characteristics of the guardrail pictures;
step S4: merging the output of the separated spatial dimension characteristic with the output of the separated time characteristic;
step S5: and judging the displacement degree of the guardrail by using an optical flow method, wherein if the displacement degree exceeds 20%, the guardrail is damaged.
Further, the optical flow method judges the displacement degree of the guardrail by calculating, for each pixel, the displacement between the guardrail picture shot by the camera module and the intact guardrail picture after a time Δt; the constraint equation of the image is:
A(i, j, k, t) = A(i + Δi, j + Δj, k + Δk, t + Δt),
where A(i, j, k, t) denotes the pixel at location (i, j, k) at time t.
Further, the spatial feature enhancement network uses a double convolutional neural network structure based on AlexNet. After the guardrail picture V_t at time t is convolved by a convolutional neural network, the spatial feature output B_t is obtained; after the guardrail picture V_{t-1} at time t-1 is convolved by the convolutional neural network, the spatial feature output B_{t-1} is obtained:
B_t = (c_1, ..., c_d)^T,
where c_d is the output feature of the fully connected layer of the convolutional neural network; B_{t-1} is calculated analogously:
B_{t-1} = (m_1, ..., m_d)^T.
Subtracting the spatial features at time t-1 from the spatial features at time t yields the differing spatial features of the 2 pictures and removes background interference:
O'_t = ReLU(B_t − B_{t-1}).
Further, to fuse the output of the separated spatial-dimension features with the output of the separated temporal features, the 2 pictures (V_{t-1}, V_t) are converted into an optical flow picture V'_t by the optical flow method, and the convolution result B'_t of the optical flow picture V'_t is fused with the output result O'_t of the spatial feature enhancement network, where:
BO'_t = Concat(B'_t + O'_t).
The RGB data set and the optical flow data set correspond to each other: the optical flow data set is calculated from front and back frames of the RGB data set.
The invention provides an intelligent detection system and method for a damaged road guardrail: a guardrail picture signal acquisition and processing system with an FPGA control chip as its core is designed, and the image optical flow method is used to judge the displacement degree of the guardrail; if the displacement degree exceeds 20%, the guardrail is damaged.
Drawings
FIG. 1 is a block diagram of the present invention;
fig. 2 is a system architecture diagram of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments can be combined with each other without conflict, and the present application will be further described in detail with reference to the drawings and specific embodiments.
As shown in FIG. 1, the intelligent detection system for a damaged road guardrail is an image acquisition and processing system comprising a camera module, an analog signal conditioning circuit, a pulse signal conditioning circuit, an A/D conversion module, a ZigBee module and a display module;
the system utilizes the FPGA control chip, the picture of the guardrail to be detected is input into the FPGA control chip through the camera module, a static RGB data set and a light stream data set are respectively calculated, the displacement change is converted into the voltage change output, the voltage change output is sent into the FPGA control chip after passing through the pulse signal conditioning circuit, the encoder outputs the guardrail deformation rotating angle in a pulse form, the FPGA control chip is sent into the FPGA control chip after passing through the conditioning circuit, the FPGA control chip processes the acquired data, judges whether the displacement signal of the picture exceeds a threshold value or not, and displays the detection result on the liquid crystal display, the acquired guardrail picture is sent to the FPGA control chip through the ZigBee module and is processed.
ZigBee is a wireless network protocol for low-speed, short-distance transmission. From bottom to top, the ZigBee protocol stack consists of the physical layer (PHY), the media access control layer (MAC), the transport layer (TL), the network layer (NWK), the application layer (APL) and so on; the physical layer and the medium access control layer comply with the IEEE 802.15.4 standard.
ZigBee has the following characteristics. Low data rate: 10-250 kb/s, focused on low-throughput applications. Low power consumption: in low-power standby mode, two ordinary AA (No. 5) batteries can last 6 to 24 months. Low cost: ZigBee's low data rate and simple protocol greatly reduce cost. Large network capacity: one network can accommodate up to 65,000 devices. Short latency: typically between 15 ms and 30 ms.
The picture of the guardrail to be detected is input into the FPGA control chip through the camera module; the analog signal is converted into a digital quantity by the A/D conversion module and transmitted into the FPGA control chip via the analog signal conditioning circuit; the pulse signal is sent to the FPGA control chip after the pulse signal conditioning circuit. The FPGA control chip processes the acquired data, displays it on the OLCD display module and forwards it to the ZigBee module.
The analog signal conditioning circuit uses two-stage amplification: first amplification, then filtering, then amplification. A differential amplifier circuit serves as the pre-amplifier stage; the output signal of the pressure sensor is amplified 20 times and passed through a low-pass filter to remove high-frequency interference, the amplified signal then enters the second amplifier stage and is amplified another 20 times, and the final signal is output in the 0-5 V range, eliminating common-mode interference.
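The two-stage gain chain can be illustrated numerically as below; the sensor output level used in the example is an assumption, and only the ×20 + ×20 amplification and the 0-5 V output range come from the description above.

# Worked example of the two-stage conditioning chain: pre-amplifier x20,
# low-pass filter, second stage x20, output clipped to the 0-5 V ADC range.
STAGE1_GAIN = 20.0
STAGE2_GAIN = 20.0
V_OUT_MAX = 5.0            # final output range fed to the A/D converter

def conditioned_output(sensor_mv: float) -> float:
    """Map a pressure-sensor output (mV, assumed level) to the conditioned 0-5 V signal."""
    v = (sensor_mv / 1000.0) * STAGE1_GAIN * STAGE2_GAIN   # total gain 400
    return min(max(v, 0.0), V_OUT_MAX)                     # stay inside 0-5 V

# With a total gain of 400, a 0-12.5 mV sensor swing spans the full 0-5 V range.
print(conditioned_output(6.0))   # ~2.4 V for a 6 mV sensor output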
The encoder is a 360-degree rotary encoder with a rated voltage of DC 5 V that outputs TTL pulse signals; the signals are shaped by a 74LS138 chip and then transmitted to the FPGA control chip. The OLCD display module uses a 1.3-inch IIC/SPI serial-port liquid crystal display. The camera module uses an HF899 USB industrial high-definition camera shooting 1080P at a 135-degree wide angle.
In the FPGA control chip software design, the picture and pressure parameters are input and the analog and pulse signals are acquired and processed; the analog signals are converted by the A/D converter associated with the FPGA EP4CE6F17C8N to obtain the corresponding digital quantities, and the pulse signals are counted by triggering an external interrupt of the FPGA control chip. After calculation, processing and qualification judgment, the data are displayed on the OLCD and transmitted to the ZigBee module through a serial port.
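A host-side sketch of the pulse-counting and qualification logic is given below; the encoder resolution and the deformation-angle threshold are assumptions, since the description only states that pulses are counted through an external interrupt and compared against a threshold.

# Illustration of the pulse-to-angle conversion and pass/fail judgment that
# the FPGA firmware performs. PULSES_PER_REV and ANGLE_THRESHOLD_DEG are
# assumed values, not figures given in this disclosure.
PULSES_PER_REV = 600          # assumed encoder resolution (pulses per 360 degrees)
ANGLE_THRESHOLD_DEG = 5.0     # assumed deformation-angle threshold

def pulses_to_angle(pulse_count: int) -> float:
    """Convert an accumulated TTL pulse count into a deformation angle in degrees."""
    return 360.0 * pulse_count / PULSES_PER_REV

def guardrail_qualified(pulse_count: int, displacement_ratio: float) -> bool:
    """Qualified only if neither the rotation angle nor the picture displacement
    signal exceeds its threshold (the displacement limit of 20% comes from step S5)."""
    return pulses_to_angle(pulse_count) <= ANGLE_THRESHOLD_DEG and displacement_ratio <= 0.20

print(guardrail_qualified(5, 0.08))   # True: 3 degrees of rotation, 8% displacement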
As shown in FIG. 2, the intelligent detection method for a damaged road guardrail has a static input and a static output: the input is a group of continuous guardrail pictures, the output has two classes, guardrail damaged and guardrail not damaged, and the algorithm comprises the following steps:
step S1: inputting a guardrail picture, and respectively making a static RGB data set and an optical flow data set of the guardrail picture;
step S2: convolving the static RGB data set by using a spatial feature enhancement network to separate spatial dimension features of the guardrail picture;
step S3: convolving the optical flow data set, and extracting optical flow information from adjacent pictures as input so as to separate the time characteristics of the guardrail pictures;
step S4: merging the output of the separated spatial dimension characteristic with the output of the separated time characteristic;
step S5: and judging the displacement degree of the guardrail by using an optical flow method, wherein if the displacement degree exceeds 20%, the guardrail is damaged.
The optical flow method judges the displacement degree of the guardrail by calculating, for each pixel, the displacement between the guardrail picture shot by the camera module and the intact guardrail picture after a time Δt; the constraint equation of the image is:
A(i, j, k, t) = A(i + Δi, j + Δj, k + Δk, t + Δt),
where A(i, j, k, t) denotes the pixel at location (i, j, k) at time t.
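A minimal sketch of this displacement check follows, using OpenCV's Farneback dense optical flow as a stand-in for whichever flow algorithm is actually used; the per-pixel motion threshold is an assumption, while the 20% displacement-degree limit comes from step S5.

# Compare a reference picture of the intact guardrail with a picture taken
# delta-t later and report the fraction of pixels whose displacement exceeds
# a small motion threshold (assumed here to be 2 pixels).
import cv2
import numpy as np

def displacement_degree(intact_path: str, current_path: str, motion_px: float = 2.0) -> float:
    ref = cv2.imread(intact_path, cv2.IMREAD_GRAYSCALE)
    cur = cv2.imread(current_path, cv2.IMREAD_GRAYSCALE)
    flow = cv2.calcOpticalFlowFarneback(ref, cur, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])   # per-pixel displacement magnitude
    return float(np.mean(mag > motion_px))                 # share of displaced pixels

if __name__ == "__main__":
    degree = displacement_degree("intact.png", "current.png")
    print("damaged" if degree > 0.20 else "intact", degree)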
The optical flow method is a method for calculating motion information of an object between adjacent frames by using the change of pixels in an image sequence in a time domain and the correlation between adjacent frames to find the corresponding relationship between a previous frame and a current frame.
Visual input needs to be selectively filtered; the process of selecting a portion of the visual input according to its location is described as spatial attention. Guardrail pictures take many forms, and from an intact guardrail picture to a damaged one the differences in space and time are large. When the guardrail is seriously damaged, photographs from different periods show obvious features and the recognition rate is high; when the guardrail is damaged at an early stage, however, the features occupy only a small proportion of the whole scene and are not obvious, which poses a great challenge to identifying the degree of guardrail damage.
The spatial feature enhancement network uses a double convolutional neural network structure based on AlexNet. After the guardrail picture V_t at time t is convolved by a convolutional neural network, the spatial feature output B_t is obtained; after the guardrail picture V_{t-1} at time t-1 is convolved by the convolutional neural network, the spatial feature output B_{t-1} is obtained:
B_t = (c_1, ..., c_d)^T,
where c_d is the output feature of the fully connected layer of the convolutional neural network; B_{t-1} is calculated analogously:
B_{t-1} = (m_1, ..., m_d)^T.
Subtracting the spatial features at time t-1 from the spatial features at time t yields the differing spatial features of the 2 pictures and removes background interference:
O'_t = ReLU(B_t − B_{t-1}).
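A minimal PyTorch sketch of this spatial feature enhancement step is shown below; the torchvision AlexNet backbone and its 4096-dimensional fully connected feature are assumptions about the exact network configuration.

# Run the frames at t-1 and t through the same AlexNet backbone, take the
# fully-connected features B_{t-1} and B_t, and keep only their positive
# difference O'_t = ReLU(B_t - B_{t-1}).
import torch
import torch.nn.functional as F
from torchvision.models import alexnet

backbone = alexnet(weights=None)
# Drop the final classification layer so the backbone outputs the fc features.
backbone.classifier = torch.nn.Sequential(*list(backbone.classifier.children())[:-1])
backbone.eval()

def spatial_difference(frame_prev: torch.Tensor, frame_cur: torch.Tensor) -> torch.Tensor:
    """frame_*: (1, 3, 224, 224) tensors for V_{t-1} and V_t."""
    with torch.no_grad():
        b_prev = backbone(frame_prev)      # B_{t-1} = (m_1, ..., m_d)^T
        b_cur = backbone(frame_cur)        # B_t     = (c_1, ..., c_d)^T
    return F.relu(b_cur - b_prev)          # O'_t removes the shared background response

o_t = spatial_difference(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
print(o_t.shape)   # torch.Size([1, 4096])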
AlexNet builds on LeNet and applies the basic principles of CNNs to a much deeper and wider network. It successfully used ReLU as the CNN activation function, verified that its effect exceeds Sigmoid in deeper networks, and overcame the gradient-vanishing problem that Sigmoid suffers from in deep networks. Dropout is used during training to randomly ignore a portion of the neurons and avoid over-fitting; Dropout had been discussed in a separate paper, but AlexNet put it into practical use and confirmed its effect, applying it mainly to the last few fully connected layers. Overlapping max pooling is used in the CNN: earlier CNNs commonly used average pooling, while AlexNet uses max pooling throughout to avoid the blurring effect of average pooling, and it proposes a stride smaller than the pooling kernel size so that the pooling outputs overlap, improving the richness of the features. An LRN layer is introduced to create a competition mechanism among local neuron activities, so that larger responses become relatively larger while neurons with weaker feedback are suppressed, enhancing the generalisation ability of the model. CUDA is used to accelerate the training of the deep convolutional network, exploiting the strong parallel computing power of the GPU to handle the large amount of matrix computation during training. AlexNet was trained on two GTX 580 GPUs; a single GTX 580 has only 3 GB of video memory, which limits the maximum size of the trainable network, so the authors distributed AlexNet over both GPUs, storing half of the neuron parameters in the memory of each. Because the GPUs communicate conveniently and can access each other's memory without going through host memory, using several GPUs simultaneously is also very efficient; the AlexNet design lets communication between GPUs occur only in certain layers of the network, controlling the performance cost of communication. Data augmentation randomly crops 224 × 224 regions (and their horizontally flipped mirror images) from the 256 × 256 original images, which corresponds to increasing the amount of data by a factor of 2048; without data augmentation, a CNN with so many parameters would fall into over-fitting on the original data alone, while augmentation greatly reduces over-fitting and improves generalisation. At prediction time, the four corner crops plus the centre crop (5 positions) are taken and flipped left-right, giving 10 pictures in total; predictions are made on all of them and the average of the 10 results is taken. AlexNet also applies PCA to the RGB data of the images and adds Gaussian perturbation with a standard deviation of 0.1 to the principal components, introducing some noise; this trick reduces the error rate by about 1%.
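The augmentation and ten-crop prediction scheme recalled above can be sketched with torchvision transforms as follows; this reflects the general AlexNet recipe rather than anything specific to the guardrail system, and the model passed in is a placeholder.

# Random 224x224 crops with horizontal flips for training, and ten-crop
# (4 corners + centre, each mirrored) averaging at prediction time.
import torch
from torchvision import transforms

train_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

test_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.TenCrop(224),                                   # 10 views of one picture
    transforms.Lambda(lambda crops: torch.stack([transforms.ToTensor()(c) for c in crops])),
])

def predict_ten_crop(model, pil_image):
    views = test_tf(pil_image)            # (10, 3, 224, 224)
    with torch.no_grad():
        scores = model(views)             # one score vector per view
    return scores.mean(dim=0)             # average the 10 predictions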
A picture can be divided into two parts: spatial and temporal. The spatial part refers to the surface information of the individual frame, concerning objects, scenes and so on; the temporal part refers to the optical flow between picture frames, carrying the motion information from frame to frame. Unlike traditional shallow learning, where features have to be determined manually from experience and hand-designed algorithms, the convolutional neural network learns features autonomously layer by layer without relying on the designer's prior knowledge and experience, achieving direct end-to-end learning from the raw input to the objective function; it obtains particularly good results in image recognition and can effectively extract deep image features.
The dual stream refers to a temporal stream and a spatial stream, which are used to capture spatial feature information and temporal feature information of the guardrail picture, respectively.
To fuse the output of the separated spatial-dimension features with the output of the separated temporal features, the 2 pictures (V_{t-1}, V_t) are converted into an optical flow picture V'_t by the optical flow method, and the convolution result B'_t of the optical flow picture V'_t is fused with the output result O'_t of the spatial feature enhancement network, where:
BO'_t = Concat(B'_t + O'_t).
The RGB data set and the optical flow data set correspond to each other: the optical flow data set is calculated from front and back frames of the RGB data set.
The invention provides an intelligent detection system and method for a damaged road guardrail: a guardrail picture signal acquisition and processing system with an FPGA control chip as its core is designed, and the image optical flow method is used to judge the displacement degree of the guardrail; if the displacement degree exceeds 20%, the guardrail is damaged.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that various equivalent changes, modifications, substitutions and alterations can be made herein without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims and their equivalents.

Claims (9)

1. An intelligent detection system for a damaged road guardrail, characterized in that the system is an image acquisition and processing system comprising a camera module, an analog signal conditioning circuit, a pulse signal conditioning circuit, an A/D conversion module, a ZigBee module and a display module;
the system is built around an FPGA control chip: the picture of the guardrail to be detected is input into the FPGA control chip through the camera module, and a static RGB data set and an optical flow data set are calculated from it; displacement changes are converted into a voltage output, which is sent to the FPGA control chip after signal conditioning; the encoder outputs the guardrail deformation rotation angle in pulse form, and the pulses are transmitted to the FPGA control chip after the pulse signal conditioning circuit; the FPGA control chip processes the acquired data, judges whether the displacement signal of the picture exceeds a threshold value, and displays the detection result on the liquid crystal display; and the acquired guardrail pictures are also sent to the FPGA control chip through the ZigBee module for processing.
2. The intelligent detection system for a damaged road guardrail as claimed in claim 1, wherein the picture of the guardrail to be detected is input into the FPGA control chip through the camera module; the analog signal is converted into a digital quantity by the A/D conversion module and transmitted into the FPGA control chip via the analog signal conditioning circuit; the pulse signal is sent to the FPGA control chip after the pulse signal conditioning circuit; and the FPGA control chip processes the acquired data, displays it on the OLCD display module and forwards it to the ZigBee module.
3. The intelligent detection system for a damaged road guardrail as claimed in claim 1, wherein the analog signal conditioning circuit uses two-stage amplification, first amplification, then filtering, then amplification; a differential amplifier circuit serves as the pre-amplifier stage, the output signal of the pressure sensor is amplified 20 times and passed through a low-pass filter to remove high-frequency interference, the amplified signal then enters the second amplifier stage and is amplified another 20 times, and the final signal is output in the 0-5 V range, eliminating common-mode interference.
4. The intelligent detection system for a damaged road guardrail as claimed in claim 1, wherein the encoder is a 360-degree rotary encoder with a rated voltage of DC 5 V that outputs TTL pulse signals, the signals being shaped by a 74LS138 chip and then transmitted to the FPGA control chip;
the OLCD display module uses a 1.3-inch IIC/SPI serial-port liquid crystal display;
the camera module uses an HF899 USB industrial high-definition camera shooting 1080P at a 135-degree wide angle.
5. The intelligent detection system for a damaged road guardrail as claimed in claim 1, wherein in the FPGA control chip software design, the picture and pressure parameters are input and the analog and pulse signals are acquired and processed; the analog signals are converted by the A/D converter associated with the FPGA EP4CE6F17C8N to obtain the corresponding digital quantities; the pulse signals are counted by triggering an external interrupt of the FPGA control chip, and after calculation, processing and qualification judgment, the data are displayed on the OLCD and transmitted to the ZigBee module through a serial port.
6. A method for intelligently detecting a damaged road guardrail according to any one of claims 1-5, characterized by comprising the following processes:
static input, static output, the input is a set of continuous guardrail pictures, the output comprises two types of guardrail damage and non-guardrail damage, and the algorithm comprises the following steps:
step S1: inputting a guardrail picture, and respectively making a static RGB data set and an optical flow data set of the guardrail picture;
step S2: convolving the static RGB data set by using a spatial feature enhancement network to separate spatial dimension features of the guardrail picture;
step S3: convolving the optical flow data set, and extracting optical flow information from adjacent pictures as input so as to separate the time characteristics of the guardrail pictures;
step S4: merging the output of the separated spatial dimension characteristic with the output of the separated time characteristic;
step S5: and judging the displacement degree of the guardrail by using an optical flow method, wherein if the displacement degree exceeds 20%, the guardrail is damaged.
7. The intelligent detection method for a damaged road guardrail according to claim 6, characterized in that the optical flow method judges the displacement degree of the guardrail by calculating, for each pixel, the displacement between the guardrail picture shot by the camera module and the intact guardrail picture after a time Δt; the constraint equation of the image is:
A(i, j, k, t) = A(i + Δi, j + Δj, k + Δk, t + Δt),
where A(i, j, k, t) denotes the pixel at location (i, j, k) at time t.
8. The intelligent detection method for a damaged road guardrail according to claim 6, characterized in that the spatial feature enhancement network uses a double convolutional neural network structure based on AlexNet: after the guardrail picture V_t at time t is convolved by a convolutional neural network, the spatial feature output B_t is obtained, and after the guardrail picture V_{t-1} at time t-1 is convolved by the convolutional neural network, the spatial feature output B_{t-1} is obtained:
B_t = (c_1, ..., c_d)^T,
where c_d is the output feature of the fully connected layer of the convolutional neural network; B_{t-1} is calculated analogously:
B_{t-1} = (m_1, ..., m_d)^T;
subtracting the spatial features at time t-1 from the spatial features at time t yields the differing spatial features of the 2 pictures and removes background interference:
O'_t = ReLU(B_t − B_{t-1}).
9. The intelligent detection method for a damaged road guardrail according to claim 6, characterized in that, to fuse the output of the separated spatial-dimension features with the output of the separated temporal features, the 2 pictures (V_{t-1}, V_t) are converted into an optical flow picture V'_t by the optical flow method, and the convolution result B'_t of the optical flow picture V'_t is fused with the output result O'_t of the spatial feature enhancement network, where:
BO'_t = Concat(B'_t + O'_t);
the RGB data set and the optical flow data set correspond to each other, the optical flow data set being calculated from front and back frames of the RGB data set.
CN202111559615.4A 2021-12-20 2021-12-20 Intelligent detection system and method for damaged road guardrail Pending CN114299308A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111559615.4A CN114299308A (en) 2021-12-20 2021-12-20 Intelligent detection system and method for damaged road guardrail

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111559615.4A CN114299308A (en) 2021-12-20 2021-12-20 Intelligent detection system and method for damaged road guardrail

Publications (1)

Publication Number Publication Date
CN114299308A true CN114299308A (en) 2022-04-08

Family

ID=80967671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111559615.4A Pending CN114299308A (en) 2021-12-20 2021-12-20 Intelligent detection system and method for damaged road guardrail

Country Status (1)

Country Link
CN (1) CN114299308A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114996633A (en) * 2022-06-24 2022-09-02 中用科技有限公司 AI intelligent partial discharge detection method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120069185A1 (en) * 2010-09-21 2012-03-22 Mobileye Technologies Limited Barrier and guardrail detection using a single camera
CN104308060A (en) * 2014-10-09 2015-01-28 济南大学 Online monitoring system for steel ball cold heading forming and monitoring method
CN108918532A (en) * 2018-06-15 2018-11-30 长安大学 A kind of through street traffic sign breakage detection system and its detection method
CN111797803A (en) * 2020-07-15 2020-10-20 郑州昂达信息科技有限公司 Road guardrail abnormity detection method based on artificial intelligence and image processing
WO2021035807A1 (en) * 2019-08-23 2021-03-04 深圳大学 Target tracking method and device fusing optical flow information and siamese framework
CN112781654A (en) * 2020-12-31 2021-05-11 西南交通大学 Crack steel rail gap fault detection system
EP3859157A1 (en) * 2018-09-29 2021-08-04 Healtell (Guangzhou) Medical Technology Co., Ltd. Microfluidic pump-based infusion anomaly state detection and control system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120069185A1 (en) * 2010-09-21 2012-03-22 Mobileye Technologies Limited Barrier and guardrail detection using a single camera
CN104308060A (en) * 2014-10-09 2015-01-28 济南大学 Online monitoring system for steel ball cold heading forming and monitoring method
CN108918532A (en) * 2018-06-15 2018-11-30 长安大学 A kind of through street traffic sign breakage detection system and its detection method
EP3859157A1 (en) * 2018-09-29 2021-08-04 Healtell (Guangzhou) Medical Technology Co., Ltd. Microfluidic pump-based infusion anomaly state detection and control system
WO2021035807A1 (en) * 2019-08-23 2021-03-04 深圳大学 Target tracking method and device fusing optical flow information and siamese framework
CN111797803A (en) * 2020-07-15 2020-10-20 郑州昂达信息科技有限公司 Road guardrail abnormity detection method based on artificial intelligence and image processing
CN112781654A (en) * 2020-12-31 2021-05-11 西南交通大学 Crack steel rail gap fault detection system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
周泳; 陶兆胜; 阮孟丽; 王丽华: "Target optical flow detection method based on the FlowNet2.0 network" (基于FlowNet2.0网络的目标光流检测方法), Journal of Longyan University (龙岩学院学报), no. 02, 25 March 2020 (2020-03-25) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114996633A (en) * 2022-06-24 2022-09-02 中用科技有限公司 AI intelligent partial discharge detection method
CN114996633B (en) * 2022-06-24 2024-05-14 中用科技有限公司 AI intelligent partial discharge detection method

Similar Documents

Publication Publication Date Title
Yin et al. Recurrent convolutional network for video-based smoke detection
CN107609491A (en) A kind of vehicle peccancy parking detection method based on convolutional neural networks
CN107274445A (en) A kind of image depth estimation method and system
CN107169993A (en) Detection recognition method is carried out to object using public security video monitoring blurred picture
CN103778786A (en) Traffic violation detection method based on significant vehicle part model
Kruthiventi et al. Low-light pedestrian detection from RGB images using multi-modal knowledge distillation
CN107301375A (en) A kind of video image smog detection method based on dense optical flow
WO2024037408A1 (en) Underground coal mine pedestrian detection method based on image fusion and feature enhancement
CN113780132A (en) Lane line detection method based on convolutional neural network
CN112084928A (en) Road traffic accident detection method based on visual attention mechanism and ConvLSTM network
CN108038486A (en) A kind of character detecting method
CN111582074A (en) Monitoring video leaf occlusion detection method based on scene depth information perception
CN114299308A (en) Intelligent detection system and method for damaged road guardrail
CN114708566A (en) Improved YOLOv 4-based automatic driving target detection method
Tao et al. Smoky vehicle detection based on range filtering on three orthogonal planes and motion orientation histogram
CN110348329B (en) Pedestrian detection method based on video sequence interframe information
Li et al. RailNet: An information aggregation network for rail track segmentation
Saif et al. Crowd density estimation from autonomous drones using deep learning: challenges and applications
Shahbaz et al. Deep atrous spatial features-based supervised foreground detection algorithm for industrial surveillance systems
CN115019340A (en) Night pedestrian detection algorithm based on deep learning
Ma et al. Convolutional three-stream network fusion for driver fatigue detection from infrared videos
Lin et al. Airborne moving vehicle detection for urban traffic surveillance
CN116824630A (en) Light infrared image pedestrian target detection method
CN115909192A (en) Pedestrian detection method based on improved EfficientDet
CN115171214A (en) Construction site abnormal behavior detection method and system based on FCOS target detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination