CN113239845A - Infrared target detection method and system for embedded platform - Google Patents
- Publication number
- CN113239845A (application number CN202110579876.6A)
- Authority
- CN
- China
- Prior art keywords
- convolution
- model
- infrared
- network
- yolov4
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V20/00 — Image or video recognition or understanding; scenes; scene-specific elements
- G06N3/02, G06N3/04 — Neural networks; architecture, e.g. interconnection topology
- G06N3/02, G06N3/08 — Neural networks; learning methods
- G06V10/00, G06V10/40 — Extraction of image or video features
- G06V2201/00, G06V2201/07 — Indexing scheme; target detection
Abstract
The invention provides an infrared target detection method and system for embedded platforms. The method acquires an image to be identified; replaces the standard convolutions in the YOLOv4 algorithm with depthwise separable convolutions to obtain an improved YOLOv4 network model; extracts multi-scale feature maps of the image to be identified using the improved YOLOv4 model; and detects the multi-scale feature maps with a predefined infrared detection model to obtain the corresponding target detection results. The scheme improves detection speed while remaining within the computing capability of an embedded platform.
Description
Technical Field
The invention relates to the technical field of computer communication, and in particular to an infrared target detection method and system for embedded platforms.
Background
Target detection is an important research topic in computer vision. With the rapid development of deep learning, new detection algorithms for the visible-light domain continue to emerge; they fall mainly into two-stage and one-stage detection models. Two-stage models, chiefly the R-CNN family of algorithms, greatly improve detection accuracy by generating region proposals. One-stage models, chiefly the SSD (Single Shot MultiBox Detector) family and the YOLO (You Only Look Once) family, adopt a single-step framework of global regression and classification, greatly improving detection speed at the cost of some accuracy. Both kinds of models rely on preset anchor boxes; although this has brought large gains in accuracy and speed, the limitations of preset anchors also hinder further innovation in detection models. To address this, anchor-free detection models such as CornerNet and CenterNet have recently been proposed; they complete target detection without preset anchors and bring new ideas to the design of target detection models.
Target detection algorithms for the visible-light domain depend heavily on sufficient illumination and cannot meet detection requirements in poorly lit scenes. An infrared imaging system forms images from the infrared light reflected by a target and the target's own thermal radiation; it highlights the target well, is only slightly affected by illumination conditions, and can cover most poorly lit scenes. However, traditional infrared detection algorithms rely on manually designed target-contour features: an accurate template must be designed for contour feature extraction, complex and changing backgrounds can cause missed detections and inaccurate localization, substantial designer experience is required, and robustness is poor. Deep-learning-based infrared target detection algorithms achieve high detection accuracy, but they require high-power GPU computing platforms and cannot meet the real-time requirements of embedded platforms.
Disclosure of Invention
To solve these problems, the invention provides an infrared target detection method and system for embedded platforms. Based on the YOLOv4 model, it replaces the ordinary convolutions of the original network with depthwise separable convolutions and uses the YOLOv4 framework to improve the detection capability of the network, thereby improving the robustness of the detection algorithm.
In order to achieve the purpose, the technical scheme of the invention is as follows:
An infrared target detection method for an embedded platform, the method comprising:
acquiring an image to be identified;
replacing the standard convolutions in the YOLOv4 algorithm with depthwise separable convolutions to obtain an improved YOLOv4 network model;
extracting feature maps of the image to be identified based on the improved YOLOv4 model;
and detecting the multi-scale feature maps based on a predefined infrared detection model to obtain the target detection results corresponding to the feature maps.
Preferably, replacing the standard convolutions in the YOLOv4 algorithm with depthwise separable convolutions to improve the YOLOv4 network model comprises:
decomposing each standard convolution in the YOLOv4 network model into the depthwise convolution and the pointwise convolution of a depthwise separable convolution, where a denotes a Dk×Dk×M standard convolution, b denotes a Dk×Dk×1 depthwise convolution, and c denotes a 1×1×M pointwise convolution. The computation cost of the standard convolution is:
Gs = Dk × Dk × M × N × Df × Df
and the computation cost of the depthwise separable convolution is:
Gds = Dk × Dk × M × Df × Df + M × N × Df × Df,
where M and N are the numbers of input and output channels, Dk×Dk is the kernel size, and Df×Df is the spatial size of the feature map.
preferably, the predefined infrared detection model includes: initializing parameters according to the model structure to obtain an initial infrared model, and transmitting the parameters of each convolution layer in the visible light target detection model to the convolution layer corresponding to the initial infrared model;
and extracting a pre-training model for detecting the infrared object from the visible light target detection model, and finely adjusting the pre-training model in the actually acquired infrared data set to obtain the infrared detection model based on deep learning.
Preferably, detecting the feature maps based on the predefined infrared detection model comprises: inputting the extracted multi-scale feature maps into the CSPDarknet53 backbone network of the infrared detection model, where CSPDarknet53 is obtained by adding CSP modules to the Darknet53 backbone network of YOLOv3;
dividing the shallow feature maps into two parts and merging them through a cross-stage structure; and propagating the semantic information of high-level features to the lower layers of the network through a PANet, fusing it with the high-resolution information of shallow features, propagating low-level information back up to the higher layers, and making predictions on feature maps that fuse different levels to obtain fused features at different scales.
An infrared target detection system for an embedded platform, the system comprising:
the acquisition module, used to acquire an image to be identified;
the improvement module, used to replace the standard convolutions in the YOLOv4 algorithm with depthwise separable convolutions to obtain an improved YOLOv4 network model;
the feature extraction module, used to extract feature maps of the image to be identified based on the improved YOLOv4 model;
and the target detection module, used to detect the multi-scale feature maps based on a predefined infrared detection model to obtain the target detection results corresponding to the feature maps.
Preferably, the improvement module comprises:
a decomposition unit, used to decompose each standard convolution in the YOLOv4 network model into the depthwise convolution and the pointwise convolution of a depthwise separable convolution;
a replacement unit, in which a denotes a Dk×Dk×M standard convolution, b denotes a Dk×Dk×1 depthwise convolution, and c denotes a 1×1×M pointwise convolution; the computation cost of the standard convolution is:
Gs = Dk × Dk × M × N × Df × Df
and the computation cost of the depthwise separable convolution is:
Gds = Dk × Dk × M × Df × Df + M × N × Df × Df.
preferably, the object detection module includes: the pre-defining unit is used for initializing parameters according to the model structure to obtain an initial infrared model and transmitting the parameters of each convolution layer in the visible light target detection model to the convolution layer corresponding to the initial infrared model;
and extracting a pre-training model for detecting the infrared object from the visible light target detection model, and finely adjusting the pre-training model in the actually acquired infrared data set to obtain the infrared detection model based on deep learning.
Further, the object detection module further comprises: the detection unit is used for inputting the multi-size characteristic diagram extracted by the characteristic diagram extraction module into a CSPDarknet53 backbone network of the infrared detection model; wherein the CSPDarknet53 is obtained by adding CSP module on the basis of Darknet53 backbone network of YOLOv 3;
the fusion unit is used for dividing the shallow feature mapping into two parts and merging the two parts through a cross-layer hierarchy structure; transmitting semantic information of high-level features to a low-level network through a PANet network, fusing the semantic information with high-resolution information of shallow-level features, and transmitting bottom-level information to the high-level network;
and the prediction unit is used for predicting the feature maps fused with different layers to obtain fused features with different scales.
The beneficial effects of the invention are as follows:
The invention provides an infrared target detection method and system for embedded platforms: an image to be identified is acquired, and the standard convolutions in the YOLOv4 algorithm are replaced with depthwise separable convolutions to obtain an improved YOLOv4 network model. Replacing standard convolutions with depthwise separable convolutions improves detection speed while remaining within the computing capability of an embedded platform.
Feature maps of the image to be identified are extracted based on the improved YOLOv4 model, and the multi-scale feature maps are detected based on a predefined infrared detection model to obtain the corresponding target detection results. The embedded platform achieves real-time detection of multi-source images, and the decision-level-fusion target detection model achieves a good balance of accuracy and speed on the embedded platform. The method needs no manually designed template for extracting target-contour features, avoids the missed detections and inaccurate localization caused by complex and changing backgrounds, and has good robustness.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
Fig. 1 is a flowchart of an infrared target detection method for an embedded platform according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a depth separable convolution according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and therefore are only examples, and the protection scope of the present invention is not limited thereby.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
A specific embodiment of the present invention provides an infrared target detection method for an embedded platform, as shown in Fig. 1. The method comprises:
S1. acquiring an image to be identified;
S2. replacing the standard convolutions in the YOLOv4 algorithm with depthwise separable convolutions to obtain an improved YOLOv4 network model;
S3. extracting feature maps of the image to be identified based on the improved YOLOv4 model;
S4. detecting the multi-scale feature maps based on a predefined infrared detection model to obtain the target detection results corresponding to the feature maps.
In step S2, replacing the standard convolutions in the YOLOv4 algorithm with depthwise separable convolutions to improve the YOLOv4 network model comprises:
decomposing each standard convolution in the YOLOv4 network model into the depthwise convolution and the pointwise convolution of a depthwise separable convolution, where a denotes a Dk×Dk×M standard convolution, b denotes a Dk×Dk×1 depthwise convolution, and c denotes a 1×1×M pointwise convolution. The computation cost of the standard convolution is:
Gs = Dk × Dk × M × N × Df × Df
and the computation cost of the depthwise separable convolution is:
Gds = Dk × Dk × M × Df × Df + M × N × Df × Df.
In step S4, predefining the infrared detection model comprises: initializing parameters according to the model structure to obtain an initial infrared model, and transferring the parameters of each convolutional layer of a visible-light target detection model to the corresponding convolutional layer of the initial infrared model;
and extracting from the visible-light target detection model a pre-trained model for infrared object detection, then fine-tuning the pre-trained model on an actually acquired infrared data set to obtain the deep-learning-based infrared detection model.
The detection principle of the infrared detection model is as follows:
The infrared detection model follows the idea of the YOLO family of detectors: the whole image is the input of the network, and target recognition and localization are fused together through regression. CSPDarkNet, the feature extraction network used by YOLOv4, absorbs the advantages of CSPNet: CSP modules are added to DarkNet53, the backbone of YOLOv3, the shallow feature maps are divided into two parts and then merged through a cross-stage structure, which lightens the network while maintaining detection accuracy, reduces the computation bottleneck, and lowers memory cost. In addition, YOLOv4 absorbs the advantages of PANet: the semantic information of high-level features is propagated to the lower layers of the network and fused with the high-resolution information of shallow features, improving the detection of small targets; low-level information is then propagated back up to the higher layers, and prediction is finally performed on feature maps that fuse different levels.
The infrared network model decomposes each standard convolution into the depthwise convolution and the pointwise convolution of a depthwise separable convolution, greatly reducing the parameter count and computation cost of the detection model. The decomposition is shown in Fig. 2.
In Fig. 2, a is a Dk×Dk×M standard convolution, b is a Dk×Dk×1 depthwise convolution, and c is a 1×1×M pointwise convolution.
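The decomposition of Fig. 2 can be sketched naively in numpy (stride 1, "valid" padding, no bias — simplifying assumptions for illustration, not the patent's code):

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_kernels):
    """x: (M, Df, Df) input; dw_kernels: (M, Dk, Dk), one filter per
    input channel; pw_kernels: (N, M), the 1x1 filters. Naive loops,
    stride 1, no padding."""
    M, Df, _ = x.shape
    _, Dk, _ = dw_kernels.shape
    Do = Df - Dk + 1
    # depthwise step: each input channel is filtered independently
    dw_out = np.zeros((M, Do, Do))
    for m in range(M):
        for i in range(Do):
            for j in range(Do):
                dw_out[m, i, j] = np.sum(x[m, i:i + Dk, j:j + Dk] * dw_kernels[m])
    # pointwise step: a 1x1 convolution mixes the M channels into N
    pw_out = np.tensordot(pw_kernels, dw_out, axes=([1], [0]))  # (N, Do, Do)
    return pw_out

x = np.ones((3, 8, 8))
out = depthwise_separable_conv(x, np.ones((3, 3, 3)), np.ones((4, 3)))
print(out.shape)  # (4, 6, 6)
```

The depthwise step applies one Dk×Dk filter per channel (b in Fig. 2), and the pointwise step (c in Fig. 2) then combines channels, which together replace the single Dk×Dk×M standard convolution (a).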
The computation cost of the standard convolution is:
Gs = Dk × Dk × M × N × Df × Df
The computation cost of the depthwise separable convolution is:
Gds = Dk × Dk × M × Df × Df + M × N × Df × Df
Comparing the two costs gives the ratio Gds/Gs = 1/N + 1/Dk²; when Dk = 3 and the number of output channels N is large, the depthwise separable convolution costs approximately 1/9 as much as the standard convolution.
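The two cost formulas and the ratio above can be checked numerically; the layer dimensions below are arbitrary example values:

```python
def conv_cost(Dk, M, N, Df):
    """Multiply-accumulate counts from the cost formulas above."""
    standard = Dk * Dk * M * N * Df * Df
    separable = Dk * Dk * M * Df * Df + M * N * Df * Df
    return standard, separable

# e.g. a 3x3 layer mapping 256 -> 256 channels on a 52x52 feature map
gs, gds = conv_cost(Dk=3, M=256, N=256, Df=52)
print(gds / gs)  # = 1/N + 1/Dk^2 ≈ 0.115, close to 1/9 for large N
```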
On this basis, the YOLOv4 network model is improved: the standard convolutions are replaced with depthwise separable convolutions, reducing the computation, parameter count, and size of the model.
Because the types of detected targets differ, the classification layers of the visible-light and infrared target detection models need to be redesigned; during the network's forward inference, feature maps of sizes 13 × 18 and 26 × 18 are extracted for prediction. The pre-trained model for a visible-light target detection model is generally built from a model trained on ImageNet and MS COCO, and transferring model parameters makes the model converge quickly.
The design of the infrared pre-trained model first initializes parameters according to the model structure to obtain an initial model; meanwhile, to make full use of the visible-light model's detection capability, the parameters of each convolutional layer of the visible-light model are transferred to the corresponding convolutional layer of the infrared model, achieving parameter sharing. A pre-trained model for infrared object detection is then extracted from the visible-light detection model and fine-tuned on the infrared data set to obtain the deep-learning-based infrared detection model.
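A minimal sketch of the parameter-sharing step, with plain dictionaries standing in for a framework's parameter store; the layer names `conv1.weight` and `head.weight` are invented for illustration:

```python
def transfer_conv_params(visible_params, infrared_params):
    """Copy every layer parameter whose name and size match from the
    visible-light model into the infrared model; layers that differ
    (e.g. the redesigned classification layers) keep their freshly
    initialized values."""
    transferred = {}
    for name, value in infrared_params.items():
        if name in visible_params and len(visible_params[name]) == len(value):
            transferred[name] = list(visible_params[name])  # share pretrained weights
        else:
            transferred[name] = list(value)                 # keep random init
    return transferred

visible = {"conv1.weight": [1.0, 2.0], "head.weight": [9.0]}
infrared = {"conv1.weight": [0.0, 0.0], "head.weight": [0.0, 0.0]}
init = transfer_conv_params(visible, infrared)
print(init["conv1.weight"], init["head.weight"])  # [1.0, 2.0] [0.0, 0.0]
```

The shared convolutional layers start from visible-light weights, while the mismatched classification head stays at its initialization; fine-tuning on the infrared data set then adapts both.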
In step S4, detecting the feature maps based on the predefined infrared detection model comprises: inputting the extracted multi-scale feature maps into the CSPDarknet53 backbone network of the infrared detection model, where CSPDarknet53 is obtained by adding CSP modules to the Darknet53 backbone network of YOLOv3;
dividing the shallow feature maps into two parts and merging them through a cross-stage structure; and propagating the semantic information of high-level features to the lower layers through a PANet, fusing it with the high-resolution information of shallow features, propagating low-level information back up to the higher layers, and making predictions on feature maps that fuse different levels to obtain fused features at different scales.
Analysis of experimental results:
The experimental hardware was an NVIDIA Jetson TX2 embedded platform, and the experiments used the FLIR vehicle thermal infrared data set. The speed and accuracy of the infrared detection model were tested and compared against other detection models; the results are shown in Table 1.
TABLE 1 comparison of different models
Example 2:
based on the same technical concept, the invention also provides an infrared target detection system facing the embedded platform, which comprises:
the acquisition module is used for acquiring an image to be identified;
the improvement module is used for replacing the standard convolution in the YOLOv4 algorithm by using the depth separable convolution to improve the YOLOv4 network model;
the feature map extraction module is used for extracting a feature map of the image to be recognized based on the improved YOLOv4 model;
and the target detection module is used for detecting the multi-scale characteristic diagram based on a predefined infrared detection model to obtain a target detection result corresponding to the characteristic diagram.
The improvement module comprises:
a decomposition unit, used to decompose each standard convolution in the YOLOv4 network model into the depthwise convolution and the pointwise convolution of a depthwise separable convolution;
a replacement unit, in which a denotes a Dk×Dk×M standard convolution, b denotes a Dk×Dk×1 depthwise convolution, and c denotes a 1×1×M pointwise convolution; the computation cost of the standard convolution is:
Gs = Dk × Dk × M × N × Df × Df
and the computation cost of the depthwise separable convolution is:
Gds = Dk × Dk × M × Df × Df + M × N × Df × Df.
the target detection module includes: the pre-defining unit is used for initializing parameters according to the model structure to obtain an initial infrared model and transmitting the parameters of each convolution layer in the visible light target detection model to the convolution layer corresponding to the initial infrared model;
and extracting a pre-training model for detecting the infrared object from the visible light target detection model, and finely adjusting the pre-training model in the actually acquired infrared data set to obtain the infrared detection model based on deep learning.
The target detection module further comprises: the detection unit is used for inputting the multi-size characteristic diagram extracted by the characteristic diagram extraction module into a CSPDarknet53 backbone network of the infrared detection model; wherein the CSPDarknet53 is obtained by adding CSP module on the basis of Darknet53 backbone network of YOLOv 3;
the fusion unit is used for dividing the shallow feature mapping into two parts and merging the two parts through a cross-layer hierarchy structure; transmitting semantic information of high-level features to a low-level network through a PANet network, fusing the semantic information with high-resolution information of shallow-level features, and transmitting bottom-level information to the high-level network;
and the prediction unit is used for predicting the feature maps fused with different layers to obtain fused features with different scales.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solution of the invention. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described therein may still be modified, or some or all of the technical features equivalently replaced, without departing from the spirit and scope of the invention; such modifications and substitutions shall be construed as falling within the scope of the claims of the invention.
Claims (8)
1. An infrared target detection method for an embedded platform, characterized by comprising the following steps:
acquiring an image to be identified;
replacing the standard convolutions in the YOLOv4 algorithm with depthwise separable convolutions to obtain an improved YOLOv4 network model;
extracting feature maps of the image to be identified based on the improved YOLOv4 model;
and detecting the multi-scale feature maps based on a predefined infrared detection model to obtain the target detection results corresponding to the feature maps.
2. The method of claim 1, wherein replacing the standard convolutions in the YOLOv4 algorithm with depthwise separable convolutions to improve the YOLOv4 network model comprises:
decomposing each standard convolution in the YOLOv4 network model into the depthwise convolution and the pointwise convolution of a depthwise separable convolution, where a denotes a Dk×Dk×M standard convolution, b denotes a Dk×Dk×1 depthwise convolution, and c denotes a 1×1×M pointwise convolution; the computation cost of the standard convolution is:
Gs = Dk × Dk × M × N × Df × Df
and the computation cost of the depthwise separable convolution is:
Gds = Dk × Dk × M × Df × Df + M × N × Df × Df.
3. The method of claim 1, wherein predefining the infrared detection model comprises: initializing parameters according to the model structure to obtain an initial infrared model, and transferring the parameters of each convolutional layer of a visible-light target detection model to the corresponding convolutional layer of the initial infrared model;
and extracting from the visible-light target detection model a pre-trained model for infrared object detection, then fine-tuning the pre-trained model on an actually acquired infrared data set to obtain the deep-learning-based infrared detection model.
4. The method of claim 1, wherein detecting the feature maps based on the predefined infrared detection model comprises: inputting the extracted multi-scale feature maps into the CSPDarknet53 backbone network of the infrared detection model, where CSPDarknet53 is obtained by adding CSP modules to the Darknet53 backbone network of YOLOv3;
dividing the shallow feature maps into two parts and merging them through a cross-stage structure; and propagating the semantic information of high-level features to the lower layers through a PANet, fusing it with the high-resolution information of shallow features, propagating low-level information back up to the higher layers, and making predictions on feature maps that fuse different levels to obtain fused features at different scales.
5. An infrared target detection system for an embedded platform, characterized by comprising:
the acquisition module, used to acquire an image to be identified;
the improvement module, used to replace the standard convolutions in the YOLOv4 algorithm with depthwise separable convolutions to obtain an improved YOLOv4 network model;
the feature extraction module, used to extract feature maps of the image to be identified based on the improved YOLOv4 model;
and the target detection module, used to detect the multi-scale feature maps based on a predefined infrared detection model to obtain the target detection results corresponding to the feature maps.
6. The system of claim 5, wherein the improvement module comprises:
a decomposition unit, used to decompose each standard convolution in the YOLOv4 network model into the depthwise convolution and the pointwise convolution of a depthwise separable convolution;
a replacement unit, in which a denotes a Dk×Dk×M standard convolution, b denotes a Dk×Dk×1 depthwise convolution, and c denotes a 1×1×M pointwise convolution; the computation cost of the standard convolution is:
Gs = Dk × Dk × M × N × Df × Df
and the computation cost of the depthwise separable convolution is:
Gds = Dk × Dk × M × Df × Df + M × N × Df × Df.
7. The system of claim 5, wherein the target detection module comprises: a predefinition unit, used to initialize parameters according to the model structure to obtain an initial infrared model, and to transfer the parameters of each convolutional layer of a visible-light target detection model to the corresponding convolutional layer of the initial infrared model;
and to extract from the visible-light target detection model a pre-trained model for infrared object detection and fine-tune it on an actually acquired infrared data set to obtain the deep-learning-based infrared detection model.
8. The system of claim 7, wherein the target detection module further comprises: a detection unit, used to input the multi-scale feature maps extracted by the feature extraction module into the CSPDarknet53 backbone network of the infrared detection model, where CSPDarknet53 is obtained by adding CSP modules to the Darknet53 backbone network of YOLOv3;
a fusion unit, used to divide the shallow feature maps into two parts and merge them through a cross-stage structure, propagate the semantic information of high-level features to the lower layers through a PANet, fuse it with the high-resolution information of shallow features, and propagate low-level information back up to the higher layers;
and a prediction unit, used to make predictions on feature maps that fuse different levels to obtain fused features at different scales.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110579876.6A CN113239845A (en) | 2021-05-26 | 2021-05-26 | Infrared target detection method and system for embedded platform |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113239845A true CN113239845A (en) | 2021-08-10 |
Family
ID=77139130
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110579876.6A Pending CN113239845A (en) | 2021-05-26 | 2021-05-26 | Infrared target detection method and system for embedded platform |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113239845A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108288075A (en) * | 2018-02-02 | 2018-07-17 | 沈阳工业大学 | A lightweight small-target detection method based on improved SSD |
CN109711449A (en) * | 2018-12-20 | 2019-05-03 | 北京以萨技术股份有限公司 | An image classification algorithm based on fully convolutional networks |
CN110674878A (en) * | 2019-09-26 | 2020-01-10 | 苏州航韧光电技术有限公司 | Target detection method and device for dual-mode decision-level image fusion |
CN111832513A (en) * | 2020-07-21 | 2020-10-27 | 西安电子科技大学 | Real-time football target detection method based on neural networks |
CN112101434A (en) * | 2020-09-04 | 2020-12-18 | 河南大学 | Infrared image weak and small target detection method based on improved YOLO v3 |
CN112347943A (en) * | 2020-11-09 | 2021-02-09 | 哈尔滨理工大学 | Anchor-optimized safety helmet detection method based on YOLOv4 |
Non-Patent Citations (1)
Title |
---|
TANG CONG et al.: "Infrared and visible light decision-level fusion tracking based on deep learning", Laser & Optoelectronics Progress, vol. 56, no. 07, pages 217 - 224 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11341366B2 (en) | Cross-modality processing method and apparatus, and computer storage medium | |
Luo et al. | Aircraft target detection in remote sensing images based on improved YOLOv5 | |
US10074161B2 (en) | Sky editing based on image composition | |
CN108052911A (en) | Multi-modal remote sensing image high-level characteristic integrated classification method based on deep learning | |
CN111080645A (en) | Remote sensing image semi-supervised semantic segmentation method based on generating type countermeasure network | |
CN113963240B (en) | Comprehensive detection method for multi-source remote sensing image fusion target | |
US20150331929A1 (en) | Natural language image search | |
CN110084299B (en) | Target detection method and device based on multi-head fusion attention | |
CN106295613A (en) | A kind of unmanned plane target localization method and system | |
CN111815577A (en) | Method, device, equipment and storage medium for processing safety helmet wearing detection model | |
CN111460999A (en) | Low-altitude aerial image target tracking method based on FPGA | |
Farady et al. | Mask classification and head temperature detection combined with deep learning networks | |
CN113255521A (en) | Dual-mode target detection method and system for embedded platform | |
CN116824413A (en) | Aerial image target detection method based on multi-scale cavity convolution | |
Zhang et al. | MMFNet: Forest fire smoke detection using multiscale convergence coordinated pyramid network with mixed attention and fast-robust NMS | |
US20230185440A1 (en) | Synthetic image data generation using auto-detected image parameters | |
Ma et al. | Dynamic gesture contour feature extraction method using residual network transfer learning | |
Shen et al. | Infrared object detection method based on DBD-YOLOv8 | |
CN113239845A (en) | Infrared target detection method and system for embedded platform | |
Chirgaiya et al. | Tiny object detection model based on competitive multi-layer neural network (TOD-CMLNN) | |
Li et al. | Attention Mechanism Cloud Detection With Modified FCN for Infrared Remote Sensing Images | |
Qin et al. | An end-to-end traffic visibility regression algorithm | |
KR102474436B1 (en) | An apparatus for processing video and image search of natural languages based on caption data and a method for operating it | |
KR20210041856A (en) | Method and apparatus for generating learning data required to learn animation characters based on deep learning | |
CN106469437B (en) | Image processing method and image processing apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 266000 Room 302, Building 3, Office No. 77, Lingyan Road, Huangdao District, Qingdao, Shandong Province
Applicant after: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.
Address before: 266000 3rd Floor, Building 3, Optical Valley Software Park, 396 Emeishan Road, Huangdao District, Qingdao City, Shandong Province
Applicant before: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.