CN114973216A - FOD detection method and system for multi-channel visual information fusion - Google Patents
- Publication number
- CN114973216A CN114973216A CN202210641240.4A CN202210641240A CN114973216A CN 114973216 A CN114973216 A CN 114973216A CN 202210641240 A CN202210641240 A CN 202210641240A CN 114973216 A CN114973216 A CN 114973216A
- Authority
- CN
- China
- Prior art keywords
- image
- edge
- channel
- foreign matter
- laser scattering
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/70—Labelling scene content, e.g. deriving syntactic or semantic representations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The present disclosure relates to the technical field of foreign matter detection, and provides a FOD detection method and system based on multi-channel visual information fusion. The method comprises the following steps: acquiring an image to be detected, the image to be detected comprising a grayscale image, a laser scattering brightness image, and a depth image; extracting foreign matter edges from the laser scattering brightness image to obtain a processed laser scattering brightness image; performing edge extraction on the depth image to obtain an edge map of the depth image; fusing the grayscale image, the processed laser scattering brightness image, and the edge map of the depth image as three color channels to obtain a multi-channel fused foreign matter image; and performing foreign matter detection on the multi-channel fused foreign matter image to obtain the type and position of the foreign matter. Through multi-channel fusion, the fused image carries the characteristics of all three channels, the foreign matter edges are more prominent, the limitations of human visual inspection are overcome, and detection efficiency and reliability are improved.
Description
Technical Field
The present disclosure relates to the technical field of foreign matter detection, and in particular to a FOD detection method and system based on multi-channel visual information fusion.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Foreign Object Debris (FOD) on an airport runway refers to objects that may damage an aircraft, such as screws and nuts, metal fragments, stones, and plastic debris, and poses a serious safety hazard during takeoff and landing. FOD not only threatens passenger safety but can also cause enormous economic losses to airlines and airports; eliminating FOD hazards is a problem that urgently needs to be solved.
At present, most airports worldwide still rely on manual visual inspection for FOD detection, which easily causes fatigue and suffers from low efficiency and poor reliability. Research on FOD detection started earlier abroad, and four typical FOD detection systems have emerged: the Tarsier system, the FOD Detect system, the FODFinder system, and the iFerret system. The first three use ground-based millimeter-wave radar to detect and locate foreign matter on the runway; millimeter-wave radar systems are costly and place strict requirements on foreign matter size. The iFerret system consists of high-definition cameras and a data processing system; because it relies on cameras to acquire data, its performance degrades in severe weather. Existing detection methods are therefore expensive, constrained by the foreign matter itself and the detection environment, and limited in detection performance.
Disclosure of Invention
To solve the above problems, the present disclosure provides a FOD detection method and system based on multi-channel visual information fusion.
To achieve this purpose, the present disclosure adopts the following technical solutions:
one or more embodiments provide a method for detecting FOD by multi-channel visual information fusion, comprising the following steps:
acquiring an image to be detected, wherein the image to be detected comprises a gray level image, a laser scattering brightness image and a depth image;
extracting foreign matter edges of the laser scattering brightness image to obtain a processed laser scattering brightness image;
performing edge extraction processing on the depth image to obtain an edge image of the depth image;
respectively taking the gray level image, the processed laser scattering brightness image and the edge image of the depth image as three color channels for fusion to obtain a multi-channel fused foreign matter image;
and performing foreign matter detection on the multi-channel fused foreign matter image to obtain the type and position of the foreign matter.
One or more embodiments provide a multi-channel visual information fused FOD detection system, comprising:
an acquisition module, configured to acquire an image to be detected, the image to be detected comprising a grayscale image, a laser scattering brightness image, and a depth image;
a laser scattering brightness image processing module, configured to extract foreign matter edges from the laser scattering brightness image to obtain a processed laser scattering brightness image;
a depth image processing module, configured to perform edge extraction on the depth image to obtain an edge map of the depth image;
a fusion module, configured to fuse the grayscale image, the processed laser scattering brightness image, and the edge map of the depth image as three color channels to obtain a multi-channel fused foreign matter image; and
a detection module, configured to perform foreign matter detection on the multi-channel fused foreign matter image to obtain the types and positions of the foreign matter.
One or more embodiments provide a multi-channel visual information fused FOD detection system, comprising: an image acquisition device and a processor;
the image acquisition device is a vehicle-mounted line scanning structured light camera and is used for acquiring an image to be detected;
the processor is configured to perform the above FOD detection method of multi-channel visual information fusion.
An electronic device, comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the above method.
Compared with the prior art, the beneficial effects of the present disclosure are:
by collecting multiple types of images, the fused image carries the characteristics of all three channels, the foreign matter edges are more prominent, the limitations of human visual inspection are overcome, and detection efficiency and reliability are improved; the system is free from the camera's dependence on natural illumination, can be used at night, and can effectively detect foreign matter on the runway.
Advantages of the present disclosure, as well as advantages of additional aspects, will be described in detail in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and not to limit the disclosure.
FIG. 1 is a flow chart of a detection method according to embodiment 1 of the disclosure;
fig. 2 is an example of a grayscale image collected by the vehicle-mounted line-scan structured light camera of embodiment 1 of the present disclosure;
FIG. 3 is a schematic diagram of an exemplary laser scattering brightness image processing of embodiment 1 of the present disclosure;
fig. 4 is the Laplacian operator used in the example laser scattering brightness image processing of embodiment 1 of the present disclosure;
fig. 5(a) is a flowchart of a depth image processing method according to embodiment 1 of the present disclosure;
fig. 5(b) is an example depth image processing schematic diagram of embodiment 1 of the present disclosure;
fig. 6 is an x-direction Sobel edge detection operator of depth image processing of embodiment 1 of the present disclosure;
fig. 7 is a y-direction Sobel edge detection operator of depth image processing of embodiment 1 of the present disclosure;
FIG. 8 is a three-channel fused image of an exemplary image to be detected in embodiment 1 of the present disclosure;
fig. 9(a) is a detection result for pliers as the foreign matter in embodiment 1 of the present disclosure;
fig. 9(b) is a detection result for a screw as the foreign matter in embodiment 1 of the present disclosure;
fig. 9(c) is a detection result for a stone as the foreign matter in embodiment 1 of the present disclosure;
fig. 9(d) is a detection result for vegetation as the foreign matter in embodiment 1 of the present disclosure;
fig. 9(e) is a detection result for a plastic product as the foreign matter in embodiment 1 of the present disclosure.
Detailed Description
The present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for describing particular embodiments only and is not intended to limit exemplary embodiments according to the present disclosure. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof. In the absence of conflict, the embodiments in the present disclosure and the features of those embodiments may be combined with one another. The embodiments are described in detail below with reference to the accompanying drawings.
Example 1
In one or more embodiments, as shown in figs. 1 to 9, a FOD detection method of multi-channel visual information fusion comprises the following steps:
step 1, acquiring an image to be detected, the image to be detected comprising a grayscale image, a laser scattering brightness image, and a depth image;
step 2, extracting foreign matter edges from the laser scattering brightness image to obtain a processed laser scattering brightness image;
step 3, performing edge extraction on the depth image to obtain an edge map of the depth image;
step 4, fusing the grayscale image, the processed laser scattering brightness image, and the edge map of the depth image as three color channels to obtain a multi-channel fused foreign matter image;
step 5, performing foreign matter detection on the multi-channel fused foreign matter image to obtain the type and position of the foreign matter.
This embodiment collects multiple types of images; the fused image carries the characteristics of all three channels, the foreign matter edges are more prominent, the limitations of human visual inspection are overcome, and detection efficiency and reliability are improved. The system is also free from the camera's dependence on natural illumination, can be used at night, and can effectively detect foreign matter on the runway.
In step 1, a vehicle-mounted line-scan structured light camera may be used to acquire the images to be detected simultaneously.
Optionally, the camera may be mounted at the rear of a patrol vehicle.
As the patrol vehicle moves, the camera acquires images of the runway in real time. The line-scan structured light camera has three acquisition channels that are captured simultaneously: grayscale information DC0 (Data Channel 0) yields the grayscale image, laser scattering brightness information DC1 (Data Channel 1) yields the laser scattering brightness image, and depth information DC2 (Data Channel 2) yields the depth image.
In step 2, foreign matter edges are extracted from the laser scattering brightness image to obtain a processed laser scattering brightness image. Optionally, the foreign matter edges are extracted with the Laplacian operator.
Optionally, as shown in fig. 3, extracting the foreign matter edges with the Laplacian operator specifically comprises: performing a convolution operation between the laser scattering brightness image and a Laplacian operator of a set size.
Specifically, the Laplacian operator in this embodiment may be set to 3 × 3; the specific operator is shown in fig. 4.
In this embodiment, extracting foreign matter edges from the laser scattering brightness image does not require detecting edges in the x and y directions separately: a single edge-detection pass yields an image that meets the requirements, which greatly improves detection efficiency.
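As an illustrative sketch (not part of the patent text), the single-pass Laplacian edge extraction described above can be written in NumPy. The 3 × 3 kernel below is the common four-neighbour Laplacian and stands in for the operator of fig. 4, which is an assumption; the actual operator may differ.

```python
import numpy as np

# Common 3x3 four-neighbour Laplacian kernel (assumed; the patent's Fig. 4
# operator may differ).
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def convolve2d(img, kernel):
    """Sliding-window product with zero padding, output same size as input.
    (For a symmetric kernel, convolution and cross-correlation coincide.)"""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(np.float64), ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def laplacian_edges(scatter_img):
    """Single-pass edge extraction of the laser scattering brightness image."""
    edges = convolve2d(scatter_img, LAPLACIAN)
    return np.clip(np.abs(edges), 0, 255).astype(np.uint8)
```

Because the Laplacian responds to second-order intensity changes in all directions at once, no separate x- and y-direction passes are needed, matching the single-pass property claimed above.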
In step 3, the depth image is processed: edge extraction is performed on the depth image to obtain an edge map of the depth image. Optionally, a Sobel operator may be used to compute the edges of the depth image, as shown in fig. 5. The specific steps are as follows:
step 31, extracting the edge in the x direction, and performing convolution operation on the original image and an x-direction Sobel edge detection operator (as shown in fig. 6) to extract the edge in the x direction.
Step 32, extracting the edge in the y direction, and performing convolution operation on the original image and a y-direction Sobel edge detection operator (as shown in fig. 7) to extract the edge in the y direction.
And step 33, integrating the edge information in the x direction and the y direction to obtain the edge of the whole image, so as to obtain an edge map of the depth image.
Integrating the edge information in the x and y directions to obtain the edges of the whole image specifically comprises: summing the absolute values of the corresponding pixel values of the two images obtained in step 31 and step 32.
In this embodiment, the edges in the x and y directions are extracted first, and the edge information in the two directions is then integrated to obtain the edges of the whole image. This locates the positions where depth changes in the depth image, i.e. the contour edges of the foreign matter are enhanced, thereby separating the foreign matter from the road surface.
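Steps 31-33 above can be sketched in NumPy as follows, under the assumptions that the depth image is a single-channel 8-bit array and that the operators of figs. 6 and 7 are the standard 3 × 3 Sobel kernels:

```python
import numpy as np

# Standard Sobel kernels (assumed to match Figs. 6 and 7).
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Sliding-window product with zero padding, output same size as input."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(np.float64), ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_edge_map(depth_img):
    gx = convolve2d(depth_img, SOBEL_X)   # step 31: x-direction edges
    gy = convolve2d(depth_img, SOBEL_Y)   # step 32: y-direction edges
    # Step 33: integrate by summing absolute values of corresponding pixels.
    edge = np.abs(gx) + np.abs(gy)
    return np.clip(edge, 0, 255).astype(np.uint8)
```

The |gx| + |gy| combination is the same absolute-value sum described in step 33; a flat road surface produces zero response, while any depth discontinuity at a foreign object's contour produces a strong response.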
And step 4, respectively taking the gray level image, the processed laser scattering brightness image and the edge image of the depth image as three color channels for fusion to obtain a multi-channel fused foreign matter image.
Optionally, the three color channels are the G (green), B (blue), and R (red) channels. Specifically, the grayscale image is used as the G channel, the processed laser scattering brightness image as the B channel, and the edge map of the depth image as the R channel, and the images are fused to form a pseudo-color image. The fused image carries the characteristics of all three channels simultaneously, and the foreign matter edges are more prominent, as shown in fig. 8; the actual image is in color and is reproduced here in grayscale.
In this embodiment, a three-channel fused image is obtained through fusion, and the fused image is finally processed with a detection algorithm to obtain the detection result, which improves detection efficiency and reliability.
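The channel assignment of step 4 (grayscale → G, processed laser scattering → B, depth edge map → R) amounts to stacking three single-channel planes into one pseudo-color image. A minimal sketch follows; the BGR plane order is the OpenCV convention and is an assumption here, not specified by the patent:

```python
import numpy as np

def fuse_channels(gray, laser_edges, depth_edges):
    """Stack three single-channel uint8 images into one pseudo-color image.
    Assignment follows the patent: gray -> G, laser edges -> B, depth edges -> R."""
    assert gray.shape == laser_edges.shape == depth_edges.shape
    h, w = gray.shape
    fused = np.zeros((h, w, 3), dtype=np.uint8)
    # BGR plane order (OpenCV-style; an assumed convention).
    fused[..., 0] = laser_edges   # B channel
    fused[..., 1] = gray          # G channel
    fused[..., 2] = depth_edges   # R channel
    return fused
```

Because each plane carries a different physical cue (reflectance, scattering, geometry), a detector operating on the fused image sees all three cues at every pixel.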
In step 5, after the multi-channel fused foreign matter image is obtained, real-time detection of foreign matter on the runway is performed with an improved YOLOv5 (You Only Look Once) algorithm. The improvement to YOLOv5 specifically consists in attaching a CBAM attention module at the output of the Backbone network. The multi-channel fused foreign matter image is input into the trained YOLOv5 network to obtain the positions and types of the foreign matter.
The YOLOv5 network is trained to extract image features and produce an optimal detection model, enabling real-time detection of foreign matter on the runway. The specific training procedure is as follows:
and step 51, preprocessing the images of the constructed training set.
Specifically, preprocessing operations such as Mosaic data augmentation, adaptive anchor box calculation, and adaptive image scaling are performed on the images at the input.
And step 52, extracting the features of the preprocessed image.
Optionally, the CBAM attention mechanism is incorporated into the Backbone, and Conv, C3, CBAM, and SPPF modules may be used in the Backbone to extract image features.
Here, the backbone network is the Backbone; Conv is a combination of convolution, batch normalization, and an activation function; C3 is a network structure comprising 3 standard convolution layers and several Bottleneck modules; CBAM (Convolutional Block Attention Module) is a convolutional attention module; SPPF is fast spatial pyramid pooling.
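As a minimal sketch of what the CBAM module computes (channel attention followed by spatial attention), the following NumPy code operates on a single (C, H, W) feature map. The weights `w1`, `w2`, and `conv_kernel` are placeholders for what would be learned parameters in the actual network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """x: (C, H, W). A shared MLP (w1, w2) is applied to the avg- and
    max-pooled channel vectors; their sum is gated by a sigmoid."""
    avg = x.mean(axis=(1, 2))                       # (C,)
    mx = x.max(axis=(1, 2))                         # (C,)
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0)
                  + w2 @ np.maximum(w1 @ mx, 0))    # (C,)
    return x * att[:, None, None]

def spatial_attention(x, conv_kernel):
    """x: (C, H, W). A 2-D conv over the stacked channel-wise avg and max
    maps yields a per-pixel gate. conv_kernel: (2, kh, kw)."""
    stacked = np.stack([x.mean(axis=0), x.max(axis=0)])  # (2, H, W)
    kh, kw = conv_kernel.shape[1:]
    ph, pw = kh // 2, kw // 2
    padded = np.pad(stacked, ((0, 0), (ph, ph), (pw, pw)))
    att = np.zeros(x.shape[1:])
    for i in range(x.shape[1]):
        for j in range(x.shape[2]):
            att[i, j] = np.sum(padded[:, i:i + kh, j:j + kw] * conv_kernel)
    return x * sigmoid(att)[None]

def cbam(x, w1, w2, conv_kernel):
    """Channel attention followed by spatial attention, as in CBAM."""
    return spatial_attention(channel_attention(x, w1, w2), conv_kernel)
```

Attached at the Backbone output, such a module re-weights feature channels and spatial positions, which is how the attention mechanism can emphasize small foreign-object responses against the runway background.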
And step 53, performing feature fusion on the extracted features: and the high-level features are fused with the low-level features through upsampling, and the low-level features are fused with the high-level features through downsampling, so that feature maps with different sizes all contain strong semantic information and strong position information.
Optionally, the features extracted by the Backbone network may be fused in the Neck of the YOLOv5 network using FPN + PAN.
High-level features contain stronger semantic information but weaker positional information, while low-level features contain stronger positional information but weaker semantic information. The FPN passes high-level semantic information down to the lower levels, enhancing semantic expression at multiple scales; the PAN passes low-level positional information up to the higher levels, enhancing localization at multiple scales. The FPN fuses high-level features into low-level features through upsampling, and the PAN fuses low-level features into high-level features through downsampling, so that feature maps of all sizes contain both strong semantic and strong positional information.
The FPN structure upsamples from top to bottom so that the bottom feature maps contain strong foreign matter semantic information; the PAN structure downsamples from bottom to top so that the top feature maps contain strong foreign matter positional information. The two are finally fused, so that feature maps of all sizes contain both strong foreign matter semantic information and strong positional information, improving the network's detection capability.
FPN is the Feature Pyramid Network, PAN is the Path Aggregation Network, and the Neck is the neck network.
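The two pathways can be sketched on single-channel feature maps at three scales. The nearest-neighbour upsampling, average-pool downsampling, and element-wise addition below are simplifying assumptions standing in for the learned convolutions of the real Neck:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of an (H, W) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def downsample2x(x):
    """2x2 average-pool downsampling of an (H, W) feature map."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def fpn_pan_fuse(c_low, c_mid, c_high):
    """c_low: (4H, 4W) low level; c_mid: (2H, 2W); c_high: (H, W) high level.
    FPN: top-down pathway adds upsampled semantics to lower levels.
    PAN: bottom-up pathway adds downsampled localization to higher levels."""
    # FPN top-down pathway
    p_high = c_high
    p_mid = c_mid + upsample2x(p_high)
    p_low = c_low + upsample2x(p_mid)
    # PAN bottom-up pathway
    n_low = p_low
    n_mid = p_mid + downsample2x(n_low)
    n_high = p_high + downsample2x(n_mid)
    return n_low, n_mid, n_high
```

After both passes, every output scale has received contributions from every input scale, which is the "strong semantic + strong positional information at all sizes" property described above.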
Step 54, training with the fused features as the input of the YOLOv5 network and the types and positions of foreign matter on the runway as the output, until the set number of training iterations is reached, to obtain the trained YOLOv5 model.
The number of training iterations can be set as required; for example, 100 training epochs may be used.
A multi-channel fused foreign matter image to be identified is acquired and input into the trained YOLOv5 model for detection, and the types and positions of foreign matter on the runway are obtained and marked.
The types of foreign matter may include tools, debris, and parts. Tools may be aircraft maintenance tools such as pliers and wrenches; debris may be stones, plants, and the like; parts may be components of vehicles or aircraft, such as bolts and nuts.
To illustrate the detection effect, images of foreign matter were captured and detected; the output recognition results are shown in fig. 9. Various types of foreign matter, including pliers, screws, stones, vegetation, and plastic products, achieve good recognition results without being limited by lighting, improving the reliability and efficiency of foreign matter recognition.
Example 2
Based on embodiment 1, this embodiment provides a FOD detection system of multi-channel visual information fusion, including:
an acquisition module, configured to acquire an image to be detected, the image to be detected comprising a grayscale image, a laser scattering brightness image, and a depth image;
a laser scattering brightness image processing module, configured to extract foreign matter edges from the laser scattering brightness image to obtain a processed laser scattering brightness image;
a depth image processing module, configured to perform edge extraction on the depth image to obtain an edge map of the depth image;
a fusion module, configured to fuse the grayscale image, the processed laser scattering brightness image, and the edge map of the depth image as three color channels to obtain a multi-channel fused foreign matter image; and
a detection module, configured to perform foreign matter detection on the multi-channel fused foreign matter image to obtain the types and positions of the foreign matter.
It should be noted here that, each module in this embodiment corresponds to each step in embodiment 1, and the specific implementation process is the same, which is not described here again.
Example 3
Based on embodiment 1, this embodiment provides a FOD detection system of multi-channel visual information fusion, including: an image acquisition device and a processor;
the image acquisition device is a vehicle-mounted line scanning structured light camera and is used for acquiring an image to be detected;
the processor is configured to perform the FOD detection method of multi-channel visual information fusion described in embodiment 1.
Example 4
The present embodiment provides an electronic device, comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the method of embodiment 1.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
Although the present disclosure has been described with reference to specific embodiments, it should be understood that the scope of the present disclosure is not limited thereto, and those skilled in the art will appreciate that various modifications and changes can be made without departing from the spirit and scope of the present disclosure.
Claims (10)
1. A FOD detection method of multi-channel visual information fusion is characterized by comprising the following steps:
acquiring an image to be detected, wherein the image to be detected comprises a gray level image, a laser scattering brightness image and a depth image;
extracting foreign matter edges of the laser scattering brightness image to obtain a processed laser scattering brightness image;
performing edge extraction processing on the depth image to obtain an edge image of the depth image;
respectively taking the gray level image, the processed laser scattering brightness image and the edge image of the depth image as three color channels for fusion to obtain a multi-channel fused foreign matter image;
and performing foreign matter detection on the multi-channel fused foreign matter image to obtain the type and position of the foreign matter.
2. The FOD detection method of multi-channel visual information fusion according to claim 1, wherein: a vehicle-mounted line-scan structured light camera is used for image acquisition to obtain the image to be detected.
3. The FOD detection method of multi-channel visual information fusion according to claim 1, wherein: the Laplacian operator is used to extract foreign matter edges from the laser scattering brightness image, and a convolution operation is performed between the laser scattering brightness image and a Laplacian operator of a set size to obtain the processed laser scattering brightness image.
4. The FOD detection method of multi-channel visual information fusion according to claim 1, wherein: a Sobel operator is used to compute the edges of the depth image.
5. The FOD detection method of multi-channel visual information fusion according to claim 1, wherein computing the edges of the depth image with the Sobel operator comprises the following steps:
extracting the edge in the x direction by performing a convolution operation between the original image and the x-direction Sobel edge detection operator;
extracting the edge in the y direction by performing a convolution operation between the original image and the y-direction Sobel edge detection operator; and
integrating the edge information in the x and y directions to obtain the edges of the whole image, thereby obtaining the edge map of the depth image.
6. The FOD detection method of multi-channel visual information fusion according to claim 1, wherein: in the three-color-channel fusion, the three color channels are the G channel, the B channel, and the R channel respectively;
or, in the three-color-channel fusion, the grayscale image is used as the G channel, the processed laser scattering brightness image as the B channel, and the edge map of the depth image as the R channel, and image fusion is performed to form a pseudo-color image.
7. The FOD detection method of multi-channel visual information fusion according to claim 1, wherein: the foreign matter detection specifically comprises performing real-time detection of runway foreign matter with a YOLOv5 algorithm.
8. A multi-channel visual information fused FOD detection system, comprising:
an acquisition module configured to acquire an image to be detected, the image to be detected comprising a grayscale image, a laser scattering brightness image and a depth image;
a laser scattering brightness image processing module configured to extract the foreign-object edges of the laser scattering brightness image to obtain a processed laser scattering brightness image;
a depth image processing module configured to perform edge extraction on the depth image to obtain an edge image of the depth image;
a fusion module configured to fuse the grayscale image, the processed laser scattering brightness image and the edge image of the depth image as three color channels to obtain a multi-channel fused foreign-object image; and
a detection module configured to detect foreign objects from the multi-channel fused foreign-object image and obtain the type and position of the foreign objects.
9. A multi-channel visual information fused FOD detection system, comprising: an image acquisition device and a processor;
the image acquisition device being a vehicle-mounted line-scan structured-light camera configured to acquire the image to be detected;
and the processor being configured to perform the multi-channel visual information fused FOD detection method of any one of claims 1-7.
10. An electronic device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210641240.4A CN114973216A (en) | 2022-06-08 | 2022-06-08 | FOD detection method and system for multi-channel visual information fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114973216A true CN114973216A (en) | 2022-08-30 |
Family
ID=82959381
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210641240.4A Pending CN114973216A (en) | 2022-06-08 | 2022-06-08 | FOD detection method and system for multi-channel visual information fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114973216A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115588024A (en) * | 2022-11-25 | 2023-01-10 | 东莞市兆丰精密仪器有限公司 | Artificial intelligence-based complex industrial image edge extraction method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110197231B (en) | Bird condition detection equipment and identification method based on visible light and infrared light image fusion | |
CN109345547B (en) | Traffic lane line detection method and device based on deep learning multitask network | |
CN109389046B (en) | All-weather object identification and lane line detection method for automatic driving | |
CN112308826B (en) | Bridge structure surface defect detection method based on convolutional neural network | |
CN107679495B (en) | Detection method for movable engineering vehicles around power transmission line | |
CN101236648B (en) | Fog isolation and rejection filter | |
CN108805050B (en) | Electric wire detection method based on local binary pattern | |
CN110346699A (en) | Insulator arc-over information extracting method and device based on ultraviolet image processing technique | |
CN114973216A (en) | FOD detection method and system for multi-channel visual information fusion | |
CN108198417A (en) | A kind of road cruising inspection system based on unmanned plane | |
Shi et al. | Weather recognition based on edge deterioration and convolutional neural networks | |
CN115330676A (en) | Method, system and equipment for detecting foreign matters on airfield runway based on convolutional neural network | |
CN114089786A (en) | Autonomous inspection system based on unmanned aerial vehicle vision and along mountain highway | |
Liu et al. | Review of data analysis in vision inspection of power lines with an in-depth discussion of deep learning technology | |
He et al. | Obstacle detection in dangerous railway track areas by a convolutional neural network | |
CN113128476A (en) | Low-power consumption real-time helmet detection method based on computer vision target detection | |
CN109325911B (en) | Empty base rail detection method based on attention enhancement mechanism | |
Ogunrinde et al. | A review of the impacts of defogging on deep learning-based object detectors in self-driving cars | |
CN109902730B (en) | Power transmission line broken strand detection method based on deep learning | |
CN109359545B (en) | Cooperative monitoring method and device under complex low-altitude environment | |
CN115063725A (en) | Airplane skin defect identification system based on multi-scale self-adaptive SSD algorithm | |
CN112926354A (en) | Deep learning-based lane line detection method and device | |
CN107147877A (en) | FX night fog day condition all-weather colorful video imaging system and its construction method | |
CN113486866A (en) | Visual analysis method and system for airport bird identification | |
CN115984672B (en) | Detection method and device for small target in high-definition image based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||