CN110021036B - Infrared target detection method and device, computer equipment and storage medium

Infrared target detection method and device, computer equipment and storage medium

Info

Publication number
CN110021036B
CN110021036B
Authority
CN
China
Prior art keywords
learning model
training
detected
image
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910296542.0A
Other languages
Chinese (zh)
Other versions
CN110021036A (en)
Inventor
陈洛洋
刘铮
毛红霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Environmental Features
Original Assignee
Beijing Institute of Environmental Features
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Environmental Features filed Critical Beijing Institute of Environmental Features
Priority to CN201910296542.0A
Publication of CN110021036A
Application granted
Publication of CN110021036B
Current legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an infrared target detection method, which comprises the following steps: acquiring a grayscale image to be detected; inputting the grayscale image to be detected into trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images; fusing the at least two corresponding preprocessed images to obtain a fused image; and extracting the target to be detected from the fused image. By using trained learning models of different viewing angles to remove various interfering targets from the grayscale image to be detected, the method can effectively improve detection accuracy for long-distance dim small targets.

Description

Infrared target detection method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to an infrared target detection method, apparatus, computer device, and storage medium.
Background
Infrared target detection is a core technology of infrared signal processing and is applied in many fields, such as infrared search and track (IRST) systems, precision guidance systems, target monitoring systems, and satellite remote sensing systems. In recent years, intelligent information processing methods based on the visual attention mechanism have become a major research focus.
Traditional detection methods for infrared moving targets include background subtraction, optical flow, and frame differencing. For a long-distance dim infrared target, however, these methods are easily disturbed by noise such as air flow and clouds, are difficult to apply effectively, and yield low accuracy.
Disclosure of Invention
The invention aims to provide an infrared target detection method, an infrared target detection device, computer equipment, and a readable storage medium that can effectively improve detection accuracy for long-distance dim small targets.
The object of the invention is achieved by the following technical solution:
A method of infrared target detection, the method comprising:
acquiring a grayscale image to be detected;
inputting the grayscale image to be detected into trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images;
fusing the at least two corresponding preprocessed images to obtain a fused image;
and extracting the target to be detected from the fused image.
In one embodiment, the learning models of the at least two viewing angles include a trained compressed sensing learning model, a trained subspace learning model, and a trained attention learning model.
In one embodiment, before the step of inputting the grayscale image to be detected into the trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images, the method further includes:
acquiring the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model.
In one embodiment, the step of acquiring the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model includes:
acquiring a plurality of sample grayscale images;
and inputting the plurality of sample grayscale images into a preset compressed sensing learning model, a preset subspace learning model, and a preset attention learning model respectively for training, to obtain the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model.
In one embodiment, the step of fusing the at least two corresponding preprocessed images to obtain a fused image includes:
respectively obtaining the weight coefficients of the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model;
and additively fusing the corresponding preprocessed images according to the weight coefficients to obtain the fused image.
In one embodiment, the step of extracting the target to be detected from the fused image includes:
inputting the fused image into a trained latent-vector learning model to obtain a reprocessed image;
and extracting the target to be detected from the reprocessed image.
In one embodiment, before the step of extracting the target to be detected from the fused image, the method further includes:
acquiring a plurality of sample preprocessed images;
and inputting the plurality of sample preprocessed images into a preset latent-vector learning model to obtain the trained latent-vector learning model.
An infrared target detection apparatus, the apparatus comprising:
a to-be-detected image acquisition module, configured to acquire a grayscale image to be detected;
a preprocessing module, configured to input the grayscale image to be detected into trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images;
a fusion module, configured to fuse the at least two corresponding preprocessed images to obtain a fused image;
and a target extraction module, configured to extract the target to be detected from the fused image.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a grayscale image to be detected;
inputting the grayscale image to be detected into trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images;
fusing the at least two corresponding preprocessed images to obtain a fused image;
and extracting the target to be detected from the fused image.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the following steps:
acquiring a grayscale image to be detected;
inputting the grayscale image to be detected into trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images;
fusing the at least two corresponding preprocessed images to obtain a fused image;
and extracting the target to be detected from the fused image.
The invention provides an infrared target detection method that comprises: acquiring a grayscale image to be detected; inputting the grayscale image to be detected into trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images; fusing the at least two corresponding preprocessed images to obtain a fused image; and extracting the target to be detected from the fused image. By using trained learning models of different viewing angles to remove various interfering targets from the grayscale image to be detected, the method can effectively improve detection accuracy for long-distance dim small targets.
Drawings
FIG. 1 is a diagram of an exemplary environment in which an infrared target detection method may be implemented;
FIG. 2 is a schematic flow chart of a method for infrared target detection in one embodiment;
FIG. 3 is a schematic flow chart of a method for infrared target detection in another embodiment;
FIG. 4 is a block diagram showing the structure of an infrared target detection apparatus in another embodiment;
FIG. 5 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to illustrate the invention and are not intended to limit its scope.
The infrared target detection method provided by this application can be applied in the environment shown in FIG. 1. The application environment comprises a server 104 and an infrared camera device 102. The server 104 acquires a grayscale image to be detected from the infrared camera device 102; inputs the grayscale image to be detected into trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images; fuses the at least two corresponding preprocessed images to obtain a fused image; and extracts the target to be detected from the fused image. The server may be implemented as an independent server or as a server cluster composed of multiple servers; the camera device may be a camera, a mobile phone, or any other device with an imaging function.
In one embodiment, as shown in FIG. 2, an infrared target detection method is provided. Taking its application to the server in FIG. 1 as an example, the method includes the following steps:
Step S202, acquiring a grayscale image to be detected.
In this step, the grayscale image to be detected is acquired by the infrared imaging device. When the detection distance is long and the environment is complex, the target to be detected may be disturbed by complex, high-level noise of unknown type.
Step S204, inputting the grayscale image to be detected into trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images.
In one embodiment, the learning models of the at least two viewing angles include a trained compressed sensing learning model, a trained subspace learning model, and a trained attention learning model. Because the position of an object in its reference coordinate system does not change with the observer's viewing angle, the grayscale image to be detected can be processed by learning models of multiple different viewing angles.
In one embodiment, before step S204 of inputting the grayscale image to be detected into the trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images, the method further includes: acquiring the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model.
Specifically, the input single-frame image consists mainly of background, target, and noise, and can be expressed mathematically as:
F(x, y, t) = B(x, y, t) + T(x, y, t) + N(x, y, t)    (1)
where:
F(x, y, t) — output image of the detector at time t
B(x, y, t) — background signal at time t
T(x, y, t) — target signal at time t
N(x, y, t) — observation noise at time t
x, y — spatial coordinates (Cartesian coordinate system)
t — time component, corresponding to the video frame number
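As an illustration of Eq. (1), the following minimal sketch synthesizes a single frame from the three components; the image size, target location, and intensity levels are assumptions chosen for the example, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 128, 128

# B: slowly varying background (a smooth gradient stands in for sky/cloud)
yy, xx = np.mgrid[0:H, 0:W]
B = 0.3 + 0.2 * (yy / H)

# T: a dim point target occupying only a few pixels
T = np.zeros((H, W))
T[64:66, 80:82] = 0.15

# N: observation noise at time t
N = 0.02 * rng.standard_normal((H, W))

# F(x, y, t) = B + T + N, the frame the detector outputs at time t
F = B + T + N
```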
Since the target is a point target occupying only a small pixel area and the video background changes slowly, the target information is a very sparse vector in the current spatial dimension, while the background can be expressed as a low-dimensional signal:
B = AZ    (2)
||T||_0 = k    (3)
where:
A — dictionary of the background B, representing its subspace-transformed linear tensor
Z — low-dimensional coefficient representation of the background under the dictionary A
k — sparsity coefficient, representing the degree of sparsity of the target signal T
For dense signals, a domain transformation (e.g., to the Fourier or wavelet domain) is generally required: a suitable dictionary D is found so that the signal has a sparse representation in the transform domain. The transformation can be expressed mathematically as:
T = DS    (4)
where:
D — transform-domain dictionary for the space-time transformation of the target signal
S — representation of the signal in the transform domain
Because the detection distance is large, the target occupies only a few pixels on the detector surface or in the field of view of the detection system, and relative to the whole field of view it is a very sparse vector in space. Compressed sensing and subspace learning methods are therefore well suited to this type of signal, and the processing can be expressed as:
L = argmin ||Z||_* + λ||S||_1    s.t.  F = AZ + DS + N    (5)
where:
||·||_* — nuclear norm; ||·||_1 — L1 norm
λ — regularization coefficient constraining the loss function
At this point, background suppression, target enhancement, and target extraction have been converted into solving an optimization problem with an equality constraint.
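The sketch below illustrates one standard way to solve a problem of the form of Eq. (5). It drops the learned dictionaries A and D, i.e. it solves min ||Z||_* + λ||S||_1 subject to F = Z + S by alternating proximal steps, as in robust PCA; the solver choice, iteration count, and parameters are assumptions for illustration, not the patent's exact algorithm.

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0)[:, None] * Vt)

def soft(M, tau):
    """Soft thresholding: proximal operator of the L1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def decompose(F, lam=None, mu=0.1, n_iter=50):
    """Split F (n_pixels x n_frames, vectorized frames) into a low-rank
    background Z and a sparse target component S."""
    lam = lam if lam is not None else 1.0 / np.sqrt(max(F.shape))
    Z = np.zeros_like(F)
    S = np.zeros_like(F)
    for _ in range(n_iter):
        Z = svt(F - S, 1.0 / mu)    # low-rank background update
        S = soft(F - Z, lam / mu)   # sparse target-layer update
    return Z, S
```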
In one embodiment, the dictionaries A and D may be preset to fixed values, or a large amount of infrared image data may be used in supervised learning to train dictionaries A and D with optimal expressive power, so that the target can be extracted effectively.
Correspondingly, the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model may be preset or obtained by training on a large number of grayscale images.
In one implementation, the step of acquiring the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model includes:
(1) acquiring a plurality of sample grayscale images;
(2) inputting the plurality of sample grayscale images into a preset compressed sensing learning model, a preset subspace learning model, and a preset attention learning model respectively for training, to obtain the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model.
Step S206, fusing the at least two corresponding preprocessed images to obtain a fused image.
In this step, the preprocessed images may be fused additively, or the multiple preprocessed images may be fused multiplicatively.
In a specific implementation, the outputs of the three trained models may be added with equal weights; alternatively, when the preset compressed sensing learning model, preset subspace learning model, and preset attention learning model are trained, different weights may be assigned to the different learning models.
In one embodiment, step S206 of fusing the at least two corresponding preprocessed images to obtain a fused image includes:
(1) respectively obtaining the weight coefficients of the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model;
(2) additively fusing the corresponding preprocessed images according to the weight coefficients to obtain the fused image.
For example, if during training the compressed sensing learning model is found to process the sample grayscale images particularly well, its weight can be set higher than the weights of the other two learning models.
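A minimal sketch of this weighted additive fusion follows. The weight values and the normalization step are illustrative assumptions; in the method the weights come from training the three view-specific models.

```python
import numpy as np

def fuse(preprocessed, weights):
    """Weighted additive fusion of the per-view preprocessed images."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the fused image keeps the input scale
    return sum(wi * img for wi, img in zip(w, preprocessed))

# e.g., if the compressed sensing branch proved most reliable in training:
# fused = fuse([cs_img, subspace_img, attention_img], [0.5, 0.25, 0.25])
```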
Step S208, extracting the target to be detected from the fused image.
In one embodiment, the target to be detected can be extracted directly from the fused image. In some scenes, however, there may be infrared false targets, such as infrared decoys, that are highly similar to the true target. Because the detector of an infrared detection system has a planar structure and no three-dimensional sensing capability, true and false targets with highly similar features cannot be separated directly and effectively in the current model and space. In this case, false targets that closely resemble the true target must be separated from the actual true target.
As shown in FIG. 3, in one embodiment, step S208 of extracting the target to be detected from the fused image includes:
Step S810, inputting the fused image into the trained latent-vector learning model to obtain a reprocessed image;
Step S820, extracting the target to be detected from the reprocessed image.
In a specific implementation, latent-vector learning is introduced to learn further from all the suspicious-target data obtained in the previous steps. Through model learning, the real factors underlying the high similarity between true and false targets in the current dimension are mined in depth. By analogy, a patient's fever is only the macroscopic phenomenon; what actually causes the fever is a particular virus. This process can be expressed mathematically as:
p_model(x) = E_h[ p_model(x | h) ]    (6)
where:
x — data of the current dimension
h — latent (hidden) vector
The latent-vector representation of the target is thus obtained through model learning, and the learning is unsupervised. This provides another effective data modality for eliminating false targets with high similarity, improves the saliency of the real target, achieves classification, localization, and multi-target association of true/false targets, and completes intelligent detection and tracking of the target.
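One common way to realize the latent-variable factorization in Eq. (6) is an autoencoder trained with an unsupervised reconstruction loss. The sketch below is such an autoencoder; the architecture, layer sizes, and indeed the choice of an autoencoder at all are assumptions, since the patent does not fix a specific latent-vector model.

```python
import torch
import torch.nn as nn

class LatentModel(nn.Module):
    """Maps a candidate-target patch x to a latent vector h and back."""
    def __init__(self, n_in=64 * 64, n_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 256), nn.ReLU(),
                                     nn.Linear(256, n_hidden))   # x -> h
        self.decoder = nn.Sequential(nn.Linear(n_hidden, 256), nn.ReLU(),
                                     nn.Linear(256, n_in))       # h -> x

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), h

model = LatentModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(16, 64 * 64)              # a batch of suspicious-target patches
recon, h = model(x)
loss = nn.functional.mse_loss(recon, x)  # unsupervised reconstruction loss
opt.zero_grad()
loss.backward()
opt.step()
```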
In one embodiment, before step S208 of extracting the target to be detected from the fused image, the method further includes:
(1) acquiring a plurality of sample preprocessed images;
(2) inputting the plurality of sample preprocessed images into a preset latent-vector learning model to obtain the trained latent-vector learning model.
For a better understanding of the above method, an application example of the infrared target detection method of the present invention is described in detail as follows:
1) The server receives a grayscale image to be detected sent by the infrared detection terminal.
2) The server inputs the grayscale image to be detected into the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model respectively, obtaining three corresponding preprocessed images.
3) The server additively fuses the three corresponding preprocessed images according to the preset weight coefficients of the three models to obtain a fused image.
4) The server inputs the fused image into the trained latent-vector learning model to obtain a reprocessed image.
5) The server extracts the target to be detected from the reprocessed image. A sketch of this pipeline is given below.
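The following hedged sketch strings the five steps together. The model callables and the threshold rule in step 5 are hypothetical stand-ins (only the control flow follows the application example); fuse() refers to the fusion sketch above.

```python
def detect_infrared_targets(gray_image, view_models, weights, latent_model):
    """gray_image: 2-D array; view_models: the three trained per-view models;
    latent_model: the trained latent-vector model. All models are assumed
    to be callables that map an image to an image."""
    # 2) run the image through the three view-specific trained models
    preprocessed = [m(gray_image) for m in view_models]
    # 3) weighted additive fusion (see fuse() above)
    fused = fuse(preprocessed, weights)
    # 4) reprocess with the latent-vector model to suppress decoy-like responses
    reprocessed = latent_model(fused)
    # 5) extract candidates, e.g. by thresholding the reprocessed saliency map
    return reprocessed > reprocessed.mean() + 3 * reprocessed.std()
```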
According to this infrared target detection method, a grayscale image to be detected is acquired; the grayscale image to be detected is input into trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images; the at least two corresponding preprocessed images are fused to obtain a fused image; and the target to be detected is extracted from the fused image. By using trained learning models of different viewing angles to remove various interfering targets from the grayscale image to be detected, detection accuracy for long-distance dim small targets can be effectively improved.
As shown in FIG. 4, which is a schematic structural diagram of an infrared target detection apparatus in an embodiment, an infrared target detection apparatus is provided, comprising a to-be-detected image acquisition module 401, a preprocessing module 402, a fusion module 403, and a target extraction module 404, wherein:
the to-be-detected image acquisition module 401 is configured to acquire a grayscale image to be detected;
the preprocessing module 402 is configured to input the grayscale image to be detected into trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images;
the fusion module 403 is configured to fuse the at least two corresponding preprocessed images to obtain a fused image;
and the target extraction module 404 is configured to extract the target to be detected from the fused image.
For specific limitations of the infrared target detection apparatus, see the limitations on the infrared target detection method above, which are not repeated here. Each module in the infrared target detection apparatus may be implemented in whole or in part by software, hardware, or a combination of the two. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them to perform the corresponding operations.
As shown in FIG. 5, which is a schematic diagram of the internal structure of a computer device in one embodiment, the computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected by a system bus. The non-volatile storage medium of the computer device stores an operating system, a database, and computer-readable instructions; the database may store control information sequences, and the computer-readable instructions, when executed by the processor, cause the processor to implement the infrared target detection method. The processor of the computer device provides computing and control capabilities and supports the operation of the whole device. The memory of the computer device may store computer-readable instructions that, when executed by the processor, cause the processor to perform the infrared target detection method. The network interface of the computer device is used to connect and communicate with a terminal. Those skilled in the art will appreciate that the architecture shown in FIG. 5 is merely a block diagram of some of the structures relevant to the present solution and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program: acquiring a grayscale image to be detected; inputting the grayscale image to be detected into trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images; fusing the at least two corresponding preprocessed images to obtain a fused image; and extracting the target to be detected from the fused image.
In one embodiment, when the computer program is executed by the processor, the learning models of the at least two viewing angles include a trained compressed sensing learning model, a trained subspace learning model, and a trained attention learning model.
In one embodiment, when the processor executes the computer program, before the step of inputting the grayscale image to be detected into the trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images, the method further includes: acquiring the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model.
In one embodiment, when the processor executes the computer program, the step of acquiring the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model includes: acquiring a plurality of sample grayscale images; and inputting the plurality of sample grayscale images into a preset compressed sensing learning model, a preset subspace learning model, and a preset attention learning model respectively for training, to obtain the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model.
In one embodiment, when the processor executes the computer program, the step of fusing the at least two corresponding preprocessed images to obtain a fused image includes: respectively obtaining the weight coefficients of the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model; and additively fusing the corresponding preprocessed images according to the weight coefficients to obtain the fused image.
In one embodiment, when the processor executes the computer program, the step of extracting the target to be detected from the fused image includes: inputting the fused image into a trained latent-vector learning model to obtain a reprocessed image; and extracting the target to be detected from the reprocessed image.
In one embodiment, when the processor executes the computer program, before the step of extracting the target to be detected from the fused image, the method further includes: acquiring a plurality of sample preprocessed images; and inputting the plurality of sample preprocessed images into a preset latent-vector learning model to obtain the trained latent-vector learning model.
In one embodiment, a storage medium is provided storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps: acquiring a grayscale image to be detected; inputting the grayscale image to be detected into trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images; fusing the at least two corresponding preprocessed images to obtain a fused image; and extracting the target to be detected from the fused image.
In one embodiment, when the computer-readable instructions are executed by the processor, the learning models of the at least two viewing angles include a trained compressed sensing learning model, a trained subspace learning model, and a trained attention learning model.
In one embodiment, when the computer-readable instructions are executed by the processor, before the step of inputting the grayscale image to be detected into the trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images, the method further includes: acquiring the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model.
In one embodiment, when the computer-readable instructions are executed by the processor, the step of acquiring the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model includes: acquiring a plurality of sample grayscale images; and inputting the plurality of sample grayscale images into a preset compressed sensing learning model, a preset subspace learning model, and a preset attention learning model respectively for training, to obtain the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model.
In one embodiment, when the computer-readable instructions are executed by the processor, the step of fusing the at least two corresponding preprocessed images to obtain a fused image includes: respectively obtaining the weight coefficients of the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model; and additively fusing the corresponding preprocessed images according to the weight coefficients to obtain the fused image.
In one embodiment, when the computer-readable instructions are executed by the processor, the step of extracting the target to be detected from the fused image includes: inputting the fused image into a trained latent-vector learning model to obtain a reprocessed image; and extracting the target to be detected from the reprocessed image.
In one embodiment, when the computer-readable instructions are executed by the processor, before the step of extracting the target to be detected from the fused image, the method further includes: acquiring a plurality of sample preprocessed images; and inputting the plurality of sample preprocessed images into a preset latent-vector learning model to obtain the trained latent-vector learning model.
It should be understood that although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the invention, and such improvements and modifications shall also fall within the protection scope of the invention.

Claims (7)

1. An infrared target detection method, comprising:
acquiring a grayscale image to be detected;
inputting the grayscale image to be detected into trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images;
fusing the at least two corresponding preprocessed images to obtain a fused image;
and extracting a target to be detected from the fused image;
wherein the learning models of the at least two viewing angles comprise a trained compressed sensing learning model, a trained subspace learning model, and a trained attention learning model;
the step of fusing the at least two corresponding preprocessed images to obtain a fused image comprises:
respectively obtaining the weight coefficients of the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model;
and additively fusing the corresponding preprocessed images according to the weight coefficients to obtain the fused image;
and the step of extracting the target to be detected from the fused image comprises:
inputting the fused image into a trained latent-vector learning model to obtain a reprocessed image;
and extracting the target to be detected from the reprocessed image.
2. The method according to claim 1, wherein before the step of inputting the grayscale image to be detected into the trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images, the method further comprises:
acquiring the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model.
3. The method according to claim 2, wherein the step of acquiring the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model comprises:
acquiring a plurality of sample grayscale images;
and inputting the plurality of sample grayscale images into a preset compressed sensing learning model, a preset subspace learning model, and a preset attention learning model respectively for training, to obtain the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model.
4. The method according to claim 1, wherein before the step of extracting the target to be detected from the fused image, the method further comprises:
acquiring a plurality of sample preprocessed images;
and inputting the plurality of sample preprocessed images into a preset latent-vector learning model to obtain the trained latent-vector learning model.
5. An infrared target detection apparatus, comprising:
a to-be-detected image acquisition module, configured to acquire a grayscale image to be detected;
a preprocessing module, configured to input the grayscale image to be detected into trained learning models of at least two viewing angles respectively to obtain at least two corresponding preprocessed images;
a fusion module, configured to fuse the at least two corresponding preprocessed images to obtain a fused image;
and a target extraction module, configured to extract a target to be detected from the fused image;
wherein the learning models of the at least two viewing angles comprise a trained compressed sensing learning model, a trained subspace learning model, and a trained attention learning model;
the fusing of the at least two corresponding preprocessed images to obtain a fused image comprises:
respectively obtaining the weight coefficients of the trained compressed sensing learning model, the trained subspace learning model, and the trained attention learning model;
and additively fusing the corresponding preprocessed images according to the weight coefficients to obtain the fused image;
and the extracting of the target to be detected from the fused image comprises:
inputting the fused image into a trained latent-vector learning model to obtain a reprocessed image;
and extracting the target to be detected from the reprocessed image.
6. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 4 when executing the computer program.
7. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 4.
CN201910296542.0A 2019-04-13 2019-04-13 Infrared target detection method and device, computer equipment and storage medium Active CN110021036B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910296542.0A CN110021036B (en) 2019-04-13 2019-04-13 Infrared target detection method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910296542.0A CN110021036B (en) 2019-04-13 2019-04-13 Infrared target detection method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110021036A CN110021036A (en) 2019-07-16
CN110021036B (en) 2021-03-16

Family

ID=67191287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910296542.0A Active CN110021036B (en) 2019-04-13 2019-04-13 Infrared target detection method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110021036B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114118114A (en) * 2020-08-26 2022-03-01 顺丰科技有限公司 Image detection method, device and storage medium thereof
CN113537253B (en) * 2021-08-23 2024-01-23 北京环境特性研究所 Infrared image target detection method, device, computing equipment and storage medium


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984936A (en) * 2014-05-29 2014-08-13 中国航空无线电电子研究所 Multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition
CN106022231A (en) * 2016-05-11 2016-10-12 浙江理工大学 Multi-feature-fusion-based technical method for rapid detection of pedestrian
CN106339702A (en) * 2016-11-03 2017-01-18 北京星宇联合投资管理有限公司 Multi-feature fusion based face identification method
CN107609595A (en) * 2017-09-19 2018-01-19 长沙理工大学 A kind of line clipping image detecting method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Target Matching Based on Small-Sample Learning; Liu Qinglin; China Master's Theses Full-text Database, Information Science and Technology; 2019-02-15; pp. 23-25 *

Also Published As

Publication number Publication date
CN110021036A (en) 2019-07-16


Legal Events

Date Code Title Description
PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant