CN112115767A - Tunnel foreign matter detection method based on Retinex and YOLOv3 models - Google Patents
Tunnel foreign matter detection method based on Retinex and YOLOv3 models Download PDFInfo
- Publication number
- CN112115767A CN112115767A CN202010764265.4A CN202010764265A CN112115767A CN 112115767 A CN112115767 A CN 112115767A CN 202010764265 A CN202010764265 A CN 202010764265A CN 112115767 A CN112115767 A CN 112115767A
- Authority
- CN
- China
- Prior art keywords
- image
- tunnel
- illumination
- retinex
- low
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/176—Urban or other man-made structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/60—Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a tunnel foreign matter detection method based on Retinex and YOLOv3 models, which comprises the following steps: shooting tunnel images with a quad-rotor unmanned aerial vehicle and transmitting them to a ground station for preprocessing; estimating the illumination component from the obtained tunnel low-illumination images by guided filtering; decomposing the actual color of the image in the logarithmic domain according to the Retinex theory, and reducing image distortion with a multi-scale color recovery algorithm to complete low-illumination enhancement; annotating the processed tunnel image data set with labelimg software; and training a YOLOv3 network with the labeled data set to identify foreign objects in the tunnel images. Compared with common tunnel foreign matter detection methods, the method is less affected by insufficient light and adapts better to the environment, so the tunnel foreign matter detection task can be performed more stably and accurately.
Description
Technical Field
The invention belongs to the field of tunnel foreign matter detection, and particularly relates to a tunnel foreign matter detection method based on Retinex and YOLOv3 models.
Background
As the railway operation network grows increasingly dense, the safe operation of the railway system becomes ever more important, and effective measures must be taken to minimize the influence of foreign matter on railway safety. Railway foreign matter detection can be divided into contact detection and non-contact detection according to whether the detection equipment touches the track. Contact detection requires a large amount of installation work, is costly, covers only a very limited range of foreign matter types, and is rarely applied in engineering practice. Non-contact detection, with its low installation cost, small installation workload, and wide detection range, is the mainstream method for railway foreign matter detection. It mainly uses cameras to collect images and video around the track for monitoring; because existing video surveillance requires dedicated personnel to watch the footage, which is labor-intensive, deep learning, a currently popular technology, can instead be used to automatically identify foreign matter in railway surveillance images.
In recent years, many scholars at home and abroad have proposed workable algorithms for railway foreign matter intrusion detection from video, applied them in practical tasks, and achieved some success in eliminating foreign matter threats in time and reducing railway accidents. However, most of these methods are not very accurate, and the accuracy and reliability of the detection algorithms require further research; moreover, research and experiments have focused mainly on daytime visible-light images, without fully considering tunnels and low-illumination conditions at night.
Disclosure of Invention
The invention aims to provide a tunnel foreign matter detection method based on Retinex and YOLOv3 models, which is suitable for tunnel environments and low-illumination conditions at night.
The technical solution for realizing the purpose of the invention is as follows: a tunnel foreign matter detection method based on Retinex and YOLOv3 models comprises the following steps:
step 1, shooting tunnel images with a quad-rotor unmanned aerial vehicle and transmitting them to a ground station for preprocessing;
step 2, estimating the illumination component from the obtained tunnel low-illumination image by guided filtering;
step 3, decomposing the actual color of the image in the logarithmic domain according to the Retinex theory, and reducing image distortion with a multi-scale color recovery algorithm to complete low-illumination enhancement;
step 4, annotating the processed tunnel image data set with labelimg software;
and step 5, training a YOLOv3 network with the labeled data set, thereby identifying the foreign matter in the tunnel image.
Compared with the prior art, the invention has the following remarkable advantages: (1) the transmitted video images are denoised by Gaussian filtering, which removes high-frequency noise introduced by the environment and ensures image quality; (2) the Retinex algorithm is optimized with guided filtering, which estimates the illumination component from the obtained tunnel low-illumination image; compared with common isotropic filtering, this preserves more of the image's edge information, helps the subsequent target detection algorithm extract the edge texture of the foreign matter, and thus improves identification accuracy; (3) the Retinex theory is used to enhance the low-illumination image, which better removes the influence of illumination, restores the color information of objects, and enables foreign matter detection under low-illumination conditions; (4) the YOLOv3 network is trained to recognize common foreign matter in tunnels and can report the specific foreign matter type and position coordinates, reducing the consumption of human resources and promoting intelligent development in the detection field.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
Fig. 1 is an overall flowchart of the tunnel foreign matter detection method of the present invention.
FIG. 2 is a flow chart of low illumination enhancement performed by Retinex according to the present invention.
FIG. 3 is a schematic diagram of the structure of the Darknet-53 network of the present invention.
FIG. 4 is a diagram illustrating the detection results in the embodiment of the present invention.
Fig. 5 is a schematic structural diagram of the YOLOv3 detection network in the present invention.
Detailed Description
As shown in fig. 1, the tunnel foreign object detection method based on Retinex and YOLOv3 models of the present invention includes the following steps:
step 1, shooting tunnel images with a quad-rotor unmanned aerial vehicle and transmitting them to a ground station for preprocessing;
step 2, estimating the illumination component from the obtained tunnel low-illumination image by guided filtering;
step 3, decomposing the actual color of the image in the logarithmic domain according to the Retinex theory, and reducing image distortion with a multi-scale color recovery algorithm to complete low-illumination enhancement;
step 4, annotating the processed tunnel image data set with labelimg software;
and step 5, training a YOLOv3 network with the labeled data set, thereby identifying the foreign matter in the tunnel image.
Further, the method for acquiring and preprocessing the tunnel low-illumination image in the step 1 comprises the following specific steps:
step 1.1: acquiring images in a tunnel environment by using a quad-rotor unmanned aerial vehicle, and transmitting the images back to a ground station;
step 1.2: carrying out noise reduction processing on the obtained image information by using a Gaussian filtering method so as to eliminate high-frequency noise brought by the environment;
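As a concrete illustration of step 1.2, the Gaussian denoising can be sketched in a few lines of numpy; the kernel size and sigma below are illustrative assumptions, not values given in the patent:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_denoise(img, size=5, sigma=1.0):
    """Convolve an image with a Gaussian kernel to suppress high-frequency noise."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out

# a flat image with one noisy spike: filtering strongly attenuates the spike
img = np.full((9, 9), 10.0)
img[4, 4] = 110.0
den = gaussian_denoise(img)
```

In practice an optimized library routine would replace the explicit loops; the sketch only shows the operation the step describes.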
further, the step 2 of estimating an illumination component from the obtained tunnel low-illumination image by using a guided filtering method specifically includes:
step 2.1: inputting the obtained tunnel low-illumination image p and the guide image I into a guided filter to obtain the illumination component, namely the output image q:
q_i = Σ_j W_ij(I) · p_j (2)
where i and j denote pixel indices, and W_ij is a filter kernel that depends only on the guide image I.
Step 2.2: using the local linear relationship between the output q and the guide image I within a filter window w_k, q can be expressed as:
q_i = a_k · I_i + b_k, ∀ i ∈ w_k (3)
where a_k and b_k are constant coefficients and w_k is a window defined on the image.
Step 2.3: for each filter window, optimization is performed using least squares; from equations (3) and (2), equation (4) is obtained:
(a_k, b_k) = argmin Σ_{i∈w_k} (a_k · I_i + b_k − p_i)² (4)
where argmin denotes taking the minimum of the expression that follows, q is the output image, p is the obtained tunnel low-illumination image, and a_k and b_k represent the constant coefficients.
Step 2.4: introducing a regularization parameter ε to obtain the loss function within the filter window:
E(a_k, b_k) = Σ_{i∈w_k} [(a_k · I_i + b_k − p_i)² + ε · a_k²] (5)
The optimization solution of equation (5) is:
a_k = ((1/|w|) Σ_{i∈w_k} I_i · p_i − μ_k · p̄_k) / (σ_k² + ε) (6)
b_k = p̄_k − a_k · μ_k (7)
where μ_k and σ_k² denote the mean and variance of the guide image I within the window w_k, |w| is the number of pixels in w_k, and p̄_k is the mean of the obtained tunnel low-illumination image p within w_k.
Step 2.5: when the guide image I is identical to the obtained tunnel low-illumination image p, a_k and b_k simplify to:
a_k = σ_k² / (σ_k² + ε) (8)
b_k = (1 − a_k) · μ_k (9)
q = a_k · I + b_k (10)
so that regions where the image variance σ_k² is large retain their edge information, while regions where the variance is small are smoothly filtered, yielding a better-quality illumination estimate.
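The derivation in steps 2.1 to 2.5 can be sketched as follows. This is a minimal numpy rendering of the standard guided filter (He et al.'s formulation, which the equations above follow); the window radius r and regularization eps are illustrative choices, not values from the patent:

```python
import numpy as np

def box_mean(x, r):
    """Mean of x over a (2r+1)x(2r+1) edge-padded window, i.e. 1/|w| times the sum over w_k."""
    size = 2 * r + 1
    padded = np.pad(x, r, mode="edge")
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

def guided_filter(I, p, r=2, eps=0.01):
    """Guided filter: per-window coefficients from Eqs. (6)-(7), output per Eq. (10)."""
    mu = box_mean(I, r)                   # mean of guide image in each window
    mu_p = box_mean(p, r)                 # mean of input image in each window
    var = box_mean(I * I, r) - mu * mu    # variance of guide image
    cov = box_mean(I * p, r) - mu * mu_p  # covariance of guide and input
    a = cov / (var + eps)                 # Eq. (6)
    b = mu_p - a * mu                     # Eq. (7)
    # average a, b over all windows covering each pixel, then q = a*I + b (Eq. (10))
    return box_mean(a, r) * I + box_mean(b, r)

# self-guided case I == p (Eq. (8)): flat areas are smoothed, the step edge survives
p = np.zeros((10, 10))
p[:, 5:] = 1.0
q = guided_filter(p, p, r=2, eps=0.01)
```

In the self-guided case the coefficient a_k reduces exactly to Eq. (8), so low-variance regions are replaced by their local mean while the high-variance step edge passes through almost unchanged.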
further, in step 3, the actual color of the image is decomposed in a logarithmic domain according to the Retinex theory, and the distortion of the image is reduced by using a multi-scale color recovery algorithm to complete the enhancement of low illumination, specifically:
step 3.1: according to the illumination estimation component q and the tunnel low-illumination image p obtained in the step 2, resolving the actual color of the image in a logarithmic domain by utilizing a Retinex theory:
r(i, j) = log R(i, j) = log p(i, j) − log q(i, j) (11)
where i and j denote pixel indices, R is the actual color of the image, log is the logarithm operation, and r is the logarithm of the actual color.
Step 3.2: repeating the step 3.1 to obtain the actual colors of the three channels of the image, and in order to reduce color distortion, repairing the actual colors of the three channels by using a multi-scale color recovery algorithm:
R_r(i, j) : R_g(i, j) : R_b(i, j) = p_r(i, j) : p_g(i, j) : p_b(i, j) (12)
where R_r, R_g, and R_b denote the components of the actual image color R in the three channels, and p_r, p_g, and p_b denote the components of the tunnel low-illumination image p in the three channels.
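A minimal sketch of the log-domain decomposition of Eq. (11), plus one common form of color restoration; the msrcr_color_restore factor is an assumed textbook MSRCR form, not necessarily the patent's exact Eq. (12):

```python
import numpy as np

def single_scale_retinex(p, q, eps=1e-6):
    """Eq. (11): per-channel log-domain reflectance r = log p - log q."""
    return np.log(p + eps) - np.log(q + eps)

def msrcr_color_restore(r, p, alpha=125.0, beta=46.0, eps=1e-6):
    """A common MSRCR color-restoration factor (assumed form):
    C = beta * (log(alpha * p_channel) - log(sum over channels of p))."""
    s = p.sum(axis=2, keepdims=True) + eps
    C = beta * (np.log(alpha * p + eps) - np.log(s))
    return C * r

# demo: the observation is p = q * R_true, so exp(r) recovers the reflectance
q = np.full((4, 4, 3), 0.2)        # estimated illumination (dim, uniform)
R_true = np.full((4, 4, 3), 0.8)   # true scene reflectance
p = q * R_true                     # observed low-illumination image
r = single_scale_retinex(p, q)
restored = msrcr_color_restore(r, p)
```

The demo shows the point of Eq. (11): once the illumination estimate q is divided out in the log domain, the recovered reflectance no longer depends on how dim the illumination was.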
Further, in step 4, the processed tunnel image data set is annotated with labelimg software; the data set contains wrenches, bolts, water bottles, and pedestrians, and the annotations are saved in xml format.
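labelImg saves annotations in Pascal VOC xml. A sketch of reading such a file is below; the sample file name, class, and coordinates are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A minimal hand-written sample in the Pascal VOC style that labelImg emits;
# the file name and box coordinates are illustrative, not from the patent.
SAMPLE = """
<annotation>
  <filename>tunnel_0001.jpg</filename>
  <size><width>416</width><height>416</height><depth>3</depth></size>
  <object>
    <name>wrench</name>
    <bndbox><xmin>120</xmin><ymin>200</ymin><xmax>180</xmax><ymax>260</ymax></bndbox>
  </object>
</annotation>
"""

def parse_voc(xml_text):
    """Extract (class, xmin, ymin, xmax, ymax) boxes from a labelImg-style xml."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        boxes.append((name, *(int(bb.findtext(t)) for t in ("xmin", "ymin", "xmax", "ymax"))))
    return boxes

print(parse_voc(SAMPLE))  # → [('wrench', 120, 200, 180, 260)]
```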
Further, in step 5, training the YOLOv3 network by using the labeled data set, so as to identify the foreign object in the tunnel image, specifically:
step 5.1: sending the tunnel image data set processed in the step 3 and the xml annotation information obtained in the step 4 into a Darknet-53 network, and alternately extracting image characteristics by using a convolution layer and a residual error layer;
step 5.2: and constructing detectors with three different scales, and respectively predicting on the three scales by using the obtained multilayer characteristic diagram so as to judge whether foreign matters appear in the tunnel image.
The present invention will be described in detail below with reference to the accompanying drawings and examples.
Examples
With reference to fig. 1, the tunnel foreign object detection method based on Retinex and YOLOv3 models includes the following steps:
step 1: shooting tunnel images with a quad-rotor unmanned aerial vehicle and transmitting them to the ground station for preprocessing, specifically as follows:
step 1.1: acquiring images in a tunnel environment by using a quad-rotor unmanned aerial vehicle, and transmitting the images back to a ground station;
step 1.2: carrying out noise reduction processing on the obtained image information by using a Gaussian filtering method so as to eliminate high-frequency noise brought by the environment;
step 2: the method for estimating the illumination component from the obtained tunnel low-illumination image by using the guided filtering method specifically comprises the following steps:
step 2.1: inputting the obtained tunnel low-illumination image p and the guide image I into a guided filter to obtain the illumination component, namely the output image q:
q_i = Σ_j W_ij(I) · p_j (2)
where i and j denote pixel indices, and W_ij is a filter kernel that depends only on the guide image I.
Step 2.2: using the local linear relationship between the output q and the guide image I within a filter window w_k, q can be expressed as:
q_i = a_k · I_i + b_k, ∀ i ∈ w_k (3)
where a_k and b_k are constant coefficients and w_k is a window defined on the image.
Step 2.3: for each filtering window, an optimization process is performed using least squares, and equation (4) is obtained for equations (3) and (2):
wherein argmin is the minimum value of the post-equation, q is the output image, p is the obtained low-illumination image of the tunnel, akAnd bkRespectively, represent constant coefficients.
Step 2.4: introducing a regularization parameter to obtain a loss function within the filter window:
the optimization solution of equation (5) can be obtained:
in the formula, mukAndrespectively representing the guide image I in the window wkThe mean and variance in, | w | is the window wkThe number of the middle pixel points is increased,for obtaining the low-illumination image of the tunnel in the window wkAverage value of (1).
Step 2.5: when the guide image I is identical to the obtained tunnel low-illumination image p, akAnd bkCan be simplified so that the variance in the imageLarger regions can retain edge information, in image varianceThe smaller area can be subjected to smooth filtering, and then illumination estimation components with better quality are obtained:
bk=(1-ak)uk (9)
q=akI+bk (10)
the concrete implementation is as follows:
In regions where the image variance is large, such as edge portions, σ_k² ≫ ε, so a_k ≈ 1 and b_k ≈ 0; the output image q then equals I, and the edge information of the image is retained. In regions where the image variance is small, σ_k² ≪ ε, so a_k ≈ 0 and b_k ≈ μ_k; the output image q then equals the local mean μ_k, which completes the illumination estimation.
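The two limiting cases described above can be checked numerically from Eq. (8); the variance and ε values below are illustrative:

```python
def a_k(sigma2, eps):
    """Eq. (8) in the self-guided case: a_k = sigma_k^2 / (sigma_k^2 + eps)."""
    return sigma2 / (sigma2 + eps)

eps = 0.01
edge = a_k(1.0, eps)     # high-variance (edge) region: a_k close to 1, q follows I
flat = a_k(1e-5, eps)    # low-variance (flat) region: a_k close to 0, q follows the mean
```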
Step 3: decomposing the actual color of the image in the logarithmic domain according to the Retinex theory, and reducing image distortion with the multi-scale color recovery algorithm to complete low-illumination enhancement, specifically as follows:
step 3.1: according to the illumination estimation component q and the tunnel low-illumination image p obtained in the step 2, resolving the actual color of the image in a logarithmic domain by utilizing a Retinex theory:
r(i, j) = log R(i, j) = log p(i, j) − log q(i, j) (11)
where i and j denote pixel indices, R is the actual color of the image, log is the logarithm operation, and r is the logarithm of the actual color; the specific implementation is shown in fig. 2.
Step 3.2: repeating the step 3.1 to obtain the actual colors of the three channels of the image, and in order to reduce color distortion, repairing the actual colors of the three channels by using a multi-scale color recovery algorithm:
R_r(i, j) : R_g(i, j) : R_b(i, j) = p_r(i, j) : p_g(i, j) : p_b(i, j) (12)
where R_r, R_g, and R_b denote the components of the actual image color R in the three channels, and p_r, p_g, and p_b denote the components of the tunnel low-illumination image p in the three channels.
Step 4: annotating the processed tunnel image data set with labelimg software.
Step 5: training the YOLOv3 network with the labeled data set to identify foreign objects in the tunnel images, as follows:
Step 5.1: sending the tunnel image data set processed in step 3 and the xml annotation information obtained in step 4 into the Darknet-53 network, where convolution layers and residual layers alternately extract image features, specifically:
The Darknet-53 classification network extracts multi-layer features from the 3-channel 416 × 416 images; its structure is shown in FIG. 3 and mainly consists of alternating 3 × 3 and 1 × 1 convolution layers and residual layers.
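As a sanity check on the layer structure, the count can be tallied from the stage layout; the figures are taken from the public YOLOv3 paper and assumed to match the network in FIG. 3:

```python
# Darknet-53 layer bookkeeping: a stem conv, then five stages, each a stride-2
# 3x3 conv plus n residual units of a 1x1 conv followed by a 3x3 conv.
stem_convs = 1
residual_units_per_stage = [1, 2, 8, 8, 4]
convs = stem_convs + sum(1 + 2 * n for n in residual_units_per_stage)
print(convs)  # → 52 convolutional layers; the final classification layer makes 53
```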
Step 5.2: three detectors with different scales are constructed, and prediction is respectively carried out on the three scales by using the obtained multilayer characteristic diagram, so that whether foreign matters appear in the tunnel image or not is judged, and the detection result is shown in fig. 4.
Three detectors are constructed, predicting at three scales whose feature maps are 13 × 13, 26 × 26, and 52 × 52 respectively. The 13 × 13 scale has the largest receptive field and is used to detect large targets such as pedestrians; the 26 × 26 and 52 × 52 scales combine upsampled features with finer-grained features from earlier feature maps and are used to detect small targets such as wrenches and bolts. This realizes the multi-scale tunnel foreign matter detection task; the structure of the YOLOv3 detection network is shown in FIG. 5.
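At each scale, prediction reduces to decoding each grid cell's raw outputs into a box. The sketch below uses the standard YOLOv3 decoding and a default anchor from the public paper; none of these numbers are claimed by the patent:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph, stride):
    """Standard YOLOv3 decoding: center = (sigmoid(t) + cell index) * stride,
    size = anchor prior * exp(t)."""
    bx = (sigmoid(tx) + cx) * stride
    by = (sigmoid(ty) + cy) * stride
    bw = pw * math.exp(tw)
    bh = ph * math.exp(th)
    return bx, by, bw, bh

# zero offsets in cell (6, 6) of the 13x13 grid (stride 416 / 13 = 32), using the
# (116, 90) anchor prior from the paper's default set
bx, by, bw, bh = decode_box(0.0, 0.0, 0.0, 0.0, cx=6, cy=6, pw=116.0, ph=90.0, stride=32)
```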
In summary, for the specific task of detecting foreign matter in low-illumination tunnel scenes, the invention first estimates the illumination component of the low-illumination image using guided filtering; compared with common isotropic filtering, this better preserves the high-frequency edge information of the image and aids later edge extraction and identification. It then enhances the low-illumination image using Retinex theory with color recovery, so that image quality is high, noise is reduced, and foreign matter is recovered from the cluttered image background. Finally, the corrected pictures are used to train and run a YOLOv3 network, achieving automatic detection of tunnel foreign matter.
Claims (7)
1. A tunnel foreign matter detection method based on Retinex and YOLOv3 models is characterized by comprising the following steps:
step 1, shooting a tunnel image by using a quad-rotor unmanned aerial vehicle, and transmitting the image to a ground station for preprocessing;
step 2, estimating illumination components from the obtained tunnel low-illumination images by using a guided filtering method;
step 3, resolving the actual color of the image in a logarithmic domain according to a Retinex theory, and reducing image distortion by using a multi-scale color recovery algorithm to finish low-illumination enhancement;
step 4, performing data annotation on the processed tunnel image data set by using labelimg software;
and 5, training a YOLOv3 network by using the labeled data set, thereby identifying the foreign matters in the tunnel image.
2. The method for detecting tunnel foreign objects based on Retinex and Yolov3 models as claimed in claim 1, wherein the method for acquiring and preprocessing the low-illumination images of the tunnel in step 1 comprises:
step 1.1: acquiring images in a tunnel environment by using a quad-rotor unmanned aerial vehicle, and transmitting the images back to a ground station;
step 1.2: and carrying out noise reduction on the obtained image information by using a Gaussian filtering method to eliminate high-frequency noise brought by the environment.
3. The method for detecting tunnel foreign object based on Retinex and YOLOv3 models as claimed in claim 1, wherein the step 2 uses a guided filtering method to estimate the illumination component from the obtained low-illumination image of the tunnel, specifically:
step 2.1: inputting the obtained tunnel low-illumination image p and the guide image I into a guided filter to obtain the illumination component, namely the output image q:
q_i = Σ_j W_ij(I) · p_j (2)
where i and j denote pixel indices, and W_ij is a filter kernel that depends only on the guide image I;
step 2.2: using the local linear relationship between the output q and the guide image I within a filter window w_k, q is expressed as:
q_i = a_k · I_i + b_k, ∀ i ∈ w_k (3)
where a_k and b_k are constant coefficients and w_k is a window defined on the image;
step 2.3: for each filter window, optimization is performed using least squares; from equation (3) and equation (2), equation (4) is obtained:
(a_k, b_k) = argmin Σ_{i∈w_k} (a_k · I_i + b_k − p_i)² (4)
where argmin denotes taking the minimum of the expression that follows, q is the output image, p is the obtained tunnel low-illumination image, and a_k and b_k represent the constant coefficients;
step 2.4: introducing a regularization parameter ε to obtain the loss function within the filter window:
E(a_k, b_k) = Σ_{i∈w_k} [(a_k · I_i + b_k − p_i)² + ε · a_k²] (5)
the optimization solution of equation (5) being:
a_k = ((1/|w|) Σ_{i∈w_k} I_i · p_i − μ_k · p̄_k) / (σ_k² + ε) (6)
b_k = p̄_k − a_k · μ_k (7)
where μ_k and σ_k² denote the mean and variance of the guide image I within the window w_k, |w| is the number of pixels in w_k, and p̄_k is the mean of the obtained tunnel low-illumination image p within w_k;
step 2.5: when the guide image I is identical to the obtained tunnel low-illumination image p, a_k and b_k simplify to:
a_k = σ_k² / (σ_k² + ε) (8)
b_k = (1 − a_k) · μ_k (9)
q = a_k · I + b_k (10).
4. the method for detecting tunnel foreign matter based on Retinex and YOLOv3 models as claimed in claim 1, wherein step 3 is to decompose the actual color of the image in a logarithmic domain according to Retinex theory, and to reduce the image distortion by using a multi-scale color recovery algorithm, so as to complete the low illumination enhancement, specifically:
step 3.1: according to the illumination estimation component q and the tunnel low-illumination image p obtained in the step 2, resolving the actual color of the image in a logarithmic domain by utilizing a Retinex theory:
r(i, j) = log R(i, j) = log p(i, j) − log q(i, j) (11)
where i and j denote pixel indices, R is the actual color of the image, log is the logarithm operation, and r is the logarithm of the actual color;
step 3.2: repeating the step 3.1 to obtain the actual colors of the three channels of the image, and repairing the actual colors of the three channels by using a multi-scale color recovery algorithm:
R_r(i, j) : R_g(i, j) : R_b(i, j) = p_r(i, j) : p_g(i, j) : p_b(i, j) (12)
where R_r, R_g, and R_b denote the components of the actual image color R in the three channels, and p_r, p_g, and p_b denote the components of the tunnel low-illumination image p in the three channels.
5. The method for detecting tunnel foreign matter based on Retinex and YOLOv3 models as claimed in claim 1, wherein step 4 comprises annotating the processed tunnel image data set with labelimg software, the annotations being saved in xml format.
6. The method for detecting foreign matters in tunnels based on Retinex and YOLOv3 models according to claim 5, wherein the data sets comprise wrenches, bolts, water bottles and pedestrians.
7. The method for detecting foreign objects in tunnels based on Retinex and YOLOv3 models as claimed in claim 1, wherein step 5 is to train YOLOv3 network by using labeled data set, so as to identify the foreign objects in the tunnel images, specifically:
step 5.1: sending the tunnel image data set processed in the step 3 and the xml annotation information obtained in the step 4 into a Darknet-53 network, and alternately extracting image characteristics by using a convolution layer and a residual error layer;
step 5.2: and constructing detectors with three different scales, and respectively predicting on the three scales by using the obtained multilayer characteristic diagram so as to judge whether foreign matters appear in the tunnel image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010764265.4A CN112115767B (en) | 2020-08-02 | 2020-08-02 | Tunnel foreign matter detection method based on Retinex and YOLOv3 models |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112115767A true CN112115767A (en) | 2020-12-22 |
CN112115767B CN112115767B (en) | 2022-09-30 |
Family
ID=73799707
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010764265.4A Active CN112115767B (en) | 2020-08-02 | 2020-08-02 | Tunnel foreign matter detection method based on Retinex and YOLOv3 models |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112115767B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190301886A1 (en) * | 2018-03-29 | 2019-10-03 | Nio Usa, Inc. | Sensor fusion methods for augmented reality navigation |
CN110415193A (en) * | 2019-08-02 | 2019-11-05 | 平顶山学院 | The restored method of coal mine low-light (level) blurred picture |
CN110689531A (en) * | 2019-09-23 | 2020-01-14 | 云南电网有限责任公司电力科学研究院 | Automatic power transmission line machine inspection image defect identification method based on yolo |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113420716A (en) * | 2021-07-16 | 2021-09-21 | 南威软件股份有限公司 | Improved Yolov3 algorithm-based violation behavior recognition and early warning method |
CN113420716B (en) * | 2021-07-16 | 2023-07-28 | 南威软件股份有限公司 | Illegal behavior identification and early warning method based on improved Yolov3 algorithm |
Also Published As
Publication number | Publication date |
---|---|
CN112115767B (en) | 2022-09-30 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||