CN110826554B - Infrared target detection method - Google Patents

Infrared target detection method

Info

Publication number
CN110826554B
CN110826554B
Authority
CN
China
Prior art keywords
matrix
candidate
windows
target
infrared
Prior art date
Legal status
Active
Application number
CN201810906312.7A
Other languages
Chinese (zh)
Other versions
CN110826554A (en)
Inventor
吴鑫
谢建
张建奇
黄曦
刘德连
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201810906312.7A priority Critical patent/CN110826554B/en
Publication of CN110826554A publication Critical patent/CN110826554A/en
Application granted granted Critical
Publication of CN110826554B publication Critical patent/CN110826554B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

The invention provides an infrared target detection method comprising the following steps: compressing an input infrared image and coarsely screening sliding windows of different sizes over it; quantizing the continuous data into binary codes using locality-sensitive hashing and iterative quantization; outputting an optimal rotation matrix and an optimal binary coding matrix; calculating the Hamming distance between each candidate window and the target template to obtain a first round of candidate windows; mapping the first round of candidate windows back to the original infrared image for fine screening; and outputting windows whose Hamming distance is smaller than a threshold as the positions of the target. The technical scheme addresses the low detection probability, low speed, and poor adaptability to complex scenes of existing infrared target detection algorithms.

Description

Infrared target detection method
Technical Field
The invention belongs to the field of infrared target detection, relates to an infrared target detection method, and particularly relates to a multi-scale infrared target detection method based on iterative quantization-locality sensitive hashing.
Background
The infrared target detection technology plays an important role in searching for and tracking targets in complex scenes. In a large-field-of-view infrared scene, the infrared image has a complex background containing various interference factors, so false alarms and missed detections are easily produced. When the background contains multiple objects of different sizes and orientations, accurately detecting all of them in a complex scene is a challenging task.
Among existing target detection algorithms, R-CNN (region-based convolutional neural network) uses a candidate-region approach that reduces the number of candidate windows, but region merging requires continual iterative computation and the computational complexity is high. Detection algorithms based on SSD (Single Shot MultiBox Detector) and YOLO (You Only Look Once) perform well on large targets but poorly on small ones. On the other hand, when candidate regions are extracted from an image with the conventional sliding-window method, the amount of data generated is too large; a feature vector must be computed separately for each candidate region, which is slow and takes a great deal of time when there are many candidate regions.
In summary, in the prior art the sliding window generates too much data, which makes feature-vector computation slow and time-consuming.
Disclosure of Invention
The invention aims to overcome the defects in the prior art, provides a multi-scale infrared target detection algorithm based on iterative quantization-locality sensitive hashing, and is used for solving the technical problems of low detection accuracy and low speed in the conventional multi-scale infrared target detection algorithm.
In order to achieve the purpose, the technical scheme adopted by the invention comprises the following steps:
step 1, compressing an input infrared image, and performing window sliding on the compressed infrared image by using windows with different sizes to obtain a plurality of candidate windows;
step 2, generating a data matrix X according to the candidate window and the target template;
step 3, converting the data matrix X into a binary coding matrix B;
and 4, calculating the Hamming distance between the candidate window and the target template according to the binary coding matrix B, outputting the candidate window with the Hamming distance smaller than a threshold value T, and detecting the position of the infrared target.
In an embodiment of the present invention, the step 3 specifically includes:
step 31, initializing a hyperplane mapping matrix W, mapping a data matrix X through the hyperplane mapping matrix W to reduce the dimensionality of the data matrix X to a specified dimensionality, setting a rotation matrix R, and initializing the rotation matrix R;
and 32, rotating the mapped data matrix by using the rotation matrix R, and optimizing by using an iterative quantization method to obtain a binary coding matrix B.
In an embodiment of the present invention, the step 4 specifically includes:
step 41, calculating the Hamming distance between the candidate window and the target template according to the binary coding matrix B to obtain a first round of candidate windows;
step 42, mapping the first round of candidate windows into the infrared image to obtain a new binary coding matrix D;
and 43, outputting a candidate window with the Hamming distance smaller than the threshold value according to the new binary coding matrix D, and detecting the position of the infrared target.
In an embodiment of the present invention, the step 1 specifically includes:
step 11, compressing the infrared image;
and step 12, setting moving step length and windows with different sizes, and performing window sliding on the compressed infrared image to obtain a plurality of candidate windows.
In an embodiment of the present invention, the step 2 specifically includes:
step 21, reading a plurality of target templates in a target template library, and compressing a plurality of candidate windows and a plurality of target templates to the same size;
and step 22, generating a data matrix X according to the compressed candidate window and the target template, wherein the row number of the data matrix X is the sum of the number of the candidate window and the target template, and the column number of the data matrix X is the number of pixel points of the candidate window or the target template.
In one embodiment of the present invention, the step 31 comprises:
step 311, initializing the hyperplane mapping matrix W as the first c eigenvectors of X'X, where c is the number of binary code bits, and performing dimensionality reduction on the data matrix X with a locality-sensitive hash function to obtain the mapped data matrix XW;
step 312, randomly generating a c × c matrix, and performing singular value decomposition on the c × c matrix to obtain an orthogonal matrix as an initial value of the rotation matrix R.
In one embodiment of the present invention, the step 32 comprises:
step 321, setting the projection matrix V = XWR; when V_{i,j} ≥ 0, B_{i,j} = 1, otherwise B_{i,j} = 0, generating the binary coding matrix B, where i denotes a row of the matrix, j denotes a column of the matrix, and R' is the transpose of the rotation matrix R;
step 322, setting the transition matrix C = XWB' and performing singular value decomposition on C'C and CC' so that C'C = U_1ΣU_1' and CC' = U_2ΣU_2'; letting the rotation matrix R = U_1U_2, where B' is the transpose of the binary coding matrix B and C' is the transpose of the transition matrix C;
and step 323, feeding the rotation matrix R obtained in step 322 back into step 321 and iterating N times, where N is greater than 1.
In an embodiment of the present invention, the step 41 specifically includes:
and calculating the Hamming distance between each candidate window and the target template according to the binary coding matrix B, and screening 10 candidate windows closest to each target template to serve as first round candidate windows.
In one embodiment of the present invention, the step 42 includes:
step 421, mapping the first round of candidate windows to the original infrared image to obtain uncompressed candidate windows serving as second round of candidate windows;
and 422, executing the steps 1 to 3 to the second round candidate window to obtain a new binary coding matrix D.
In an embodiment of the present invention, the step 43 specifically includes:
calculating the Hamming distance between the second round candidate window and the target template according to the new binary coding matrix D;
and outputting the candidate window with the Hamming distance smaller than the threshold value T.
Compared with the prior art, the invention has the following characteristics:
the technical scheme of the invention adopts a local sensitive Hash method, so that original similar data are still similar after dimensionality reduction while dimensionality reduction is carried out, and dissimilar data are still dissimilar after dimensionality reduction; in this way, the candidate windows are mapped to the vertexes of the binary hyperplane to obtain binary codes, and the feature vectors can be uniformly calculated for a plurality of candidate windows by adopting one-time operation and simultaneous extraction; and the Hamming distance is used for measurement, and the XOR operation of an internal operator of the computer can be used when the Hamming distance is calculated, so that the similarity calculation of a plurality of candidate windows and the template library can be completed within microsecond order.
Drawings
FIG. 1 is a first flow chart of the present embodiment;
FIG. 2 is a block flow diagram of the present technical solution;
FIG. 3 is an exemplary diagram of detection results for different infrared targets obtained with the present technical solution;
FIG. 4 is a graph of detection probability versus target size for different compression sizes.
Detailed Description
The invention is described in further detail below with reference to the following figures and specific examples:
example one
Referring to the attached figure 1, the multi-scale infrared target detection method based on the iterative quantization-locality sensitive hashing comprises the following steps:
step 1, compressing an input infrared image, and performing window sliding on the compressed image by using windows with different sizes to obtain a plurality of candidate windows;
step 2, generating a data matrix X according to the candidate window and the target template;
specifically, when detecting and identifying the infrared target, the number of candidate windows can be reduced by compressing the image under the condition of ensuring correct detection and identification of the target, thereby shortening the detection time. Since the size and shape of the object to be detected in the infrared image are different, windows of different sizes should be adopted to slide the infrared image to obtain candidate windows.
Windows of different sizes are set and then slid in turn over the compressed infrared image according to the chosen moving step length, yielding a large number of candidate windows, each with the same size as the window that produced it. Note that candidate windows obtained by sliding the same window have the same size, while candidate windows obtained from different windows differ in size, so the candidate windows are not all of the same size.
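As an illustration only, a minimal sliding-window sketch in Python/NumPy is given below; the window sizes, the step length of 4, and the 64 × 64 compressed image are assumed values, not parameters fixed by the patent.

```python
import numpy as np

def sliding_windows(image, window_sizes, step):
    """Collect (patch, (y, x, h, w)) pairs for every window position and size."""
    H, W = image.shape[:2]
    candidates = []
    for (h, w) in window_sizes:
        for y in range(0, H - h + 1, step):
            for x in range(0, W - w + 1, step):
                candidates.append((image[y:y + h, x:x + w], (y, x, h, w)))
    return candidates

# example: a compressed 64x64 infrared image, three window sizes, step 4
compressed = np.zeros((64, 64), dtype=np.uint8)
windows = sliding_windows(compressed, [(8, 8), (12, 12), (16, 16)], step=4)
```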
A plurality of groups of templates of known targets are stored in the target template library, and all target templates in the target template library are read because the types and postures of the targets to be detected and identified are unknown;
the size of the image represented by the candidate windows and the size of the image represented by the target template are scaled to the same size through a bilinear interpolation method, so that the number of pixel points of all the candidate windows and the target template is the same, and the candidate windows and the target template have comparability when the hamming distance is used for carrying out similarity measurement on the candidate windows and the target template in the follow-up process.
When a candidate window and a target template have the same size, each contains the same number of pixels, and each pixel is represented by its pixel value. A data matrix X is therefore constructed from the candidate windows and target templates: each row corresponds to one candidate window or one target template, so the number of rows equals the number of candidate windows plus the number of target templates; each column corresponds to one pixel position, so the number of columns equals the number of pixels in a candidate window or target template, which is determined by the common image size and is the same for all of them.
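One possible way to assemble the data matrix X is sketched below, using OpenCV's bilinear resize as one implementation of the bilinear interpolation mentioned above; the 16 × 16 common size and the zero-centering of the columns are assumptions, the latter chosen to be consistent with the choice b = -mean(xw) described later.

```python
import cv2
import numpy as np

def build_data_matrix(candidate_patches, template_patches, size=(16, 16)):
    """Resize every patch to a common size and stack the flattened patches as rows of X."""
    rows = []
    for patch in list(candidate_patches) + list(template_patches):
        resized = cv2.resize(patch, size, interpolation=cv2.INTER_LINEAR)
        rows.append(resized.astype(np.float64).ravel())
    X = np.vstack(rows)            # rows: candidate windows first, then target templates
    return X - X.mean(axis=0)      # zero-centre each pixel column (assumed preprocessing)
```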
Step 3, converting the data matrix X into a binary coding matrix B;
step 31, initializing a hyperplane mapping matrix W, mapping a data matrix X through the hyperplane mapping matrix W to reduce the dimensionality of the data matrix X to a specified dimensionality, setting a rotation matrix R, and initializing the rotation matrix R;
and 32, rotating the mapped data matrix by using the rotation matrix R, and optimizing by using an iterative quantization method to obtain a binary coding matrix B.
In particular, the locality-sensitive hash function preserves the similarity of the data in the original space when mapping them from a high dimension to a low dimension: while the dimensionality is reduced, originally similar data remain similar and dissimilar data remain dissimilar. By mapping the data matrix to the vertices of a hyperplane, each data point takes a mapping value of 1 or -1 depending on which side of the hyperplane it lies, and the data matrix composed of pixel values can thus be converted into binary code values. This process can be implemented with the locality-sensitive hash function
h(x) = sgn(xw + b).
When b is set to the negative of the mean of the elements of the projected data xw, the mapping function becomes h(x) = sgn(xw). In matrix form, the transformation of all data points is written B = sgn(XW), where
W = [w_1, w_2, …, w_c]
is composed, column by column, of the hyperplane coefficient vectors w_k, k = 1, 2, …, c, with c the number of binary code bits. The hyperplane mapping matrix W is initialized to the first c eigenvectors of X'X.
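A sketch of this initialization is shown below: W is taken as the c eigenvectors of X'X with the largest eigenvalues (one interpretation of "the first c eigenvectors", which is an assumption here), and the data are projected to XW; the code length c = 32 is only an illustrative choice.

```python
import numpy as np

def lsh_projection(X, c=32):
    """Return the hyperplane mapping matrix W and the mapped data matrix XW."""
    eigvals, eigvecs = np.linalg.eigh(X.T @ X)    # X'X is symmetric, so eigh is safe
    order = np.argsort(eigvals)[::-1]             # sort eigenvalues in descending order
    W = eigvecs[:, order[:c]]                     # first c eigenvectors as columns
    return W, X @ W
```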
In the process of quantizing the mapped data matrix to generate binary codes, data is changed from continuous to discrete, which inevitably causes certain quantization errors. In order to minimize the quantization error, an orthogonal matrix is used as a rotation matrix R to rotate the reduced data matrix XW, so that the quantization error in the quantization process can be reduced. The quantization error can be expressed as
Q(B, R) = ||B - XWR||_F^2
where ||·||_F denotes the Frobenius norm. Specifically, a c × c matrix is randomly generated and decomposed by singular value decomposition, and the resulting orthogonal matrix is used as the initial value of the rotation matrix R.
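The random orthogonal initialization of R can be sketched as follows; the fixed random seed is an assumption added only for reproducibility.

```python
import numpy as np

def init_rotation(c, seed=0):
    """Random c x c orthogonal matrix, obtained from the SVD of a random matrix."""
    rng = np.random.default_rng(seed)
    U, _, _ = np.linalg.svd(rng.standard_normal((c, c)))
    return U      # used as the initial rotation matrix R
```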
Let the projection matrix V = XWR, where X is the data matrix composed of the pixel values of the candidate windows and target templates; W is the hyperplane mapping matrix that maps the data matrix X from the high-dimensional space to the low-dimensional space while preserving the similarity of the data in the original space; and R is the rotation matrix used to reduce the quantization error, with R' denoting its transpose. When V_{i,j} ≥ 0, B_{i,j} = 1; otherwise B_{i,j} = 0, where V_{i,j} and B_{i,j} are the entries in row i and column j of the projection matrix V and the binary coding matrix B, respectively. The data matrix X can thus be converted into a binary coding matrix, and the binary coding matrix B is obtained in the iterative process.
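The quantization rule just described amounts to a sign test on V = XWR; a minimal sketch (not the patent's code):

```python
import numpy as np

def binarize(XW, R):
    """Rotate the mapped data and threshold at zero to obtain the 0/1 code matrix B."""
    V = XW @ R
    return (V >= 0).astype(np.uint8)
```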
And 4, calculating the Hamming distance between the candidate window and the target template according to the binary coding matrix B, and outputting the candidate window with the Hamming distance smaller than a threshold value T.
The Hamming distance is the number of positions at which two binary codes of the same length differ; it measures the similarity of two binary codes, and the smaller the distance, the more similar the candidate window is to the target template and the more likely the candidate window lies at the target's position. The binary coding matrix B contains the binary codes of the candidate windows and of the target templates; the Hamming distance between each candidate window code and each target template code is calculated, and the candidate windows whose Hamming distance is smaller than the threshold T are screened out.
In the embodiment, the compressed infrared image is adopted for window sliding, so that the number of candidate windows is greatly reduced, and the target detection efficiency is improved.
Example two
On the basis of the first embodiment, in order to reduce quantization errors occurring in the process of converting the data matrix X into the binary coding matrix B and improve the target detection hit rate, the present embodiment adopts an iterative quantization method to obtain the optimal rotation matrix R and correspondingly calculate the optimal binary coding matrix B, and the process is as follows:
on the basis of the binary coding matrix B in the first embodiment, the transition matrix C is set to XWB ', and singular value decomposition is performed on C' C and CC 'so that C' C is set to U 1 ΣU’ 1 ,CC’=U 2 ΣU’ 2 Let the rotation matrix R equal to U 1 U 2 Wherein B 'is the transpose of the binary coding matrix B, and C' is the transpose of the transition matrix C(ii) a Obtaining a new rotation matrix R by transposing the binary coding matrix B and then bringing the transposed binary coding matrix B into the transition matrix C, that is, obtaining a new rotation matrix from the initial rotation matrix R in step 312 after one iteration, and further obtaining a new rotation matrix R in step 322 New In the iteration step 321, a new binary coding matrix can be obtained, a new rotation matrix can be obtained again in the step 322 by using the new binary coding matrix, and so on, the obtained rotation matrix can make the quantization error smaller when each iteration is performed, in this embodiment, the more the iteration times are, the better the operation efficiency is, and the rotation matrix obtained when the iteration times N are 50 times can be considered as the optimal solution. Accordingly, a binary coding matrix B is obtained.
And calculating the Hamming distance between the candidate window and the target template according to the binary coding matrix B obtained after iteration. For a specific implementation process, refer to embodiment one, and are not described herein again.
EXAMPLE III
In the second embodiment the input infrared image is compressed to increase the speed of target detection, so some image information is lost and a more accurate fine screening is required. This embodiment therefore further improves on the second embodiment, as detailed below with reference to fig. 2:
and calculating the Hamming distance between each candidate window and the target template according to the binary coding matrix B, and screening 10 candidate windows closest to each target template to serve as first round candidate windows.
The Hamming distance is the number of positions at which two binary codes of the same length differ; it measures the similarity of two binary codes, and the smaller the distance, the more similar the candidate window is to the target template and the more likely the candidate window lies at the target's position. The binary coding matrix B contains the binary codes of the candidate windows and of the target templates; the Hamming distance between each candidate window code and each target template code is calculated, and the 10 candidate windows closest to each template are screened out as the first round of candidate windows.
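A sketch of this first-round screening, under the assumption that the rows of B list the candidate windows first and the target templates last:

```python
import numpy as np

def first_round_candidates(B, n_candidates, k=10):
    """Indices of candidate windows among the k nearest (by Hamming distance) to any template."""
    cand_codes = B[:n_candidates]                 # rows for candidate windows
    tmpl_codes = B[n_candidates:]                 # rows for target templates
    # Hamming distance = number of differing bits between 0/1 code rows
    dists = (cand_codes[:, None, :] != tmpl_codes[None, :, :]).sum(axis=2)
    keep = set()
    for j in range(tmpl_codes.shape[0]):
        keep.update(np.argsort(dists[:, j])[:k].tolist())   # 10 closest per template
    return sorted(keep)
```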
Mapping the first round of candidate windows to the original infrared image to obtain uncompressed candidate windows serving as second round of candidate windows;
and executing the steps 1 to 3 in fig. 1 on the second round candidate window to obtain a new binary coding matrix D.
Specifically, the first round of candidate windows is mapped onto the original infrared image, giving new candidate windows, namely the second round of candidate windows. Because the second-round patches are taken directly from the original infrared image without compression, no data are lost and their accuracy is higher than that of the first-round candidate windows.
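One way to implement this mapping is to rescale the window coordinates by the compression ratio, as in the sketch below; the (y, x, h, w) box convention and the rounding are assumptions.

```python
def map_to_original(boxes, compressed_shape, original_shape):
    """Scale (y, x, h, w) boxes from the compressed image back to original-image coordinates."""
    sy = original_shape[0] / compressed_shape[0]
    sx = original_shape[1] / compressed_shape[1]
    return [(int(round(y * sy)), int(round(x * sx)),
             int(round(h * sy)), int(round(w * sx))) for (y, x, h, w) in boxes]

def second_round_patches(original_image, boxes, compressed_shape):
    """Crop the uncompressed patches that become the second-round candidate windows."""
    mapped = map_to_original(boxes, compressed_shape, original_image.shape[:2])
    return [original_image[y:y + h, x:x + w] for (y, x, h, w) in mapped]
```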
Further, steps 1 to 3 in fig. 1 are performed on the second round of candidate windows: they are scaled to the same size as the target templates in the template library, so that they contain the same number of pixels, and a new data matrix X_new is generated. X_new is then processed, and a new binary coding matrix D is obtained after N iterations, where N is preferably 50. Because the new binary coding matrix D is derived from the uncompressed infrared image, its precision is higher than that of the binary coding matrix B.
Calculating the Hamming distance between the second round candidate window and the target template according to the new binary coding matrix D;
Specifically, the new binary coding matrix D contains the binary codes of the second-round candidate windows and of the target templates, and the Hamming distance between each second-round candidate window code and each target template code is calculated; the smaller the Hamming distance, the higher the similarity, and vice versa. A Hamming distance threshold T is set empirically, and each calculated distance is compared with T. If a distance is smaller than T, the candidate window is highly similar to the target template and its position is the position of the target, completing the target detection.
Example four
Referring to fig. 3(a1) to 3(d3): fig. 3(a1) to 3(c3) show detection results for aircraft against a complex cloud background, with fig. 3(a2), 3(b2) and 3(c2) showing the first round of candidate windows screened out; fig. 3(d3) shows the detection results for three tanks of different sizes and angles in a large field of view. The targets are successfully detected by compressing the input infrared image, sliding windows over it, and then applying the method based on iterative quantization-locality sensitive hashing and Hamming distance measurement. The compressed-image size can be adjusted according to the target size in the application scene: the larger the target, the smaller the image can be compressed and the higher the detection speed.
Referring to fig. 4, the abscissa is the size of the target in the input image, taken as the larger of its length and width, and the ordinate is the detection probability; a detection is counted as successful when the target occupies more than 70% of the framed area, and as failed otherwise. A large number of real and simulated infrared images of size 640 × 480 were tested; the differently marked curves correspond to the compressed input-image sizes used in the coarse-detection stage, namely 48 × 48, 64 × 64, 128 × 128 and 256 × 256. The experimental results show that the detection probability increases with target size and decreases as the target becomes smaller, and likewise increases with compression size and decreases as the compression size becomes smaller. When the compression size and target size exceed a certain range, the detection probability over 200 infrared images exceeds 98%, a good detection result.
In summary, the present invention provides an infrared target detection method by using specific examples, and the description of the above embodiments is only used to help understanding the scheme and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention, and the scope of the present invention should be defined by the appended claims.

Claims (8)

1. An infrared target detection method is characterized by comprising the following steps:
step 1, compressing an input infrared image, and performing window sliding on the compressed infrared image by using windows with different sizes to obtain a plurality of candidate windows;
step 2, generating a data matrix X according to the candidate window and the target template;
scaling the size of the image represented by the candidate windows and the size of the image represented by the target template to the same size by a bilinear interpolation method, wherein the number of pixel points of all the candidate windows and the target template is the same;
step 3, converting the data matrix X into a binary coding matrix B;
step 4, calculating the Hamming distance between the candidate window and the target template according to the binary coding matrix B, outputting the candidate window with the Hamming distance smaller than a threshold value T, and detecting the position of the infrared target;
the step 3 specifically comprises the following steps:
step 31, initializing a hyperplane mapping matrix W, mapping a data matrix X through the hyperplane mapping matrix W to reduce the dimensionality of the data matrix X to a specified dimensionality, setting a rotation matrix R, and initializing the rotation matrix R;
specifically, by mapping the data matrix to the vertices of a hyperplane, the mapping value of each data point is 1 or -1 depending on which side of the hyperplane the data point lies, and the data matrix composed of pixel values is converted into binary code values, implemented using the locality-sensitive hash function:
h(x) = sgn(xw + b)
where b is set to the negative of the mean of the elements of the projected data xw;
step 32, rotating the mapped data matrix by using the rotation matrix R, and optimizing by using an iterative quantization method to obtain a binary coding matrix B;
the step 4 specifically comprises the following steps:
step 41, calculating the Hamming distance between the candidate window and the target template according to the binary coding matrix B to obtain a first round of candidate windows;
step 42, mapping the first round of candidate windows into the infrared image to obtain a new binary coding matrix D;
and 43, outputting a candidate window with the Hamming distance smaller than a threshold value according to the new binary coding matrix D, and detecting the position of the infrared target.
2. The infrared target detection method according to claim 1, wherein the step 1 specifically comprises:
step 11, compressing the infrared image;
and step 12, setting moving step length and windows with different sizes, and performing window sliding on the compressed infrared image to obtain a plurality of candidate windows.
3. The infrared target detection method according to claim 2, wherein the step 2 specifically is:
step 21, reading a plurality of target templates in a target template library, and compressing a plurality of candidate windows and a plurality of target templates to the same size;
and step 22, generating a data matrix X according to the compressed candidate windows and target templates, wherein the row number of the data matrix X is the sum of the numbers of candidate windows and target templates, and the column number of the data matrix X is the number of pixel points of a candidate window or target template.
4. The infrared target detection method of claim 1, characterized in that the step 31 comprises:
step 311, initializing the hyperplane mapping matrix W as the first c eigenvectors of X'X, where c is the number of binary code bits, and performing dimensionality reduction on the data matrix X with a locality-sensitive hash function to obtain the mapped data matrix XW;
step 312, randomly generating a c × c matrix, and performing singular value decomposition on the c × c matrix to obtain an orthogonal matrix as an initial value of the rotation matrix R.
5. The infrared target detection method of claim 4, characterized in that the step 32 comprises:
step 321, setting the projection matrix V = XWR; when V_{i,j} ≥ 0, B_{i,j} = 1, otherwise B_{i,j} = 0, generating the binary coding matrix B, where i represents a row of the matrix and j represents a column of the matrix;
step 322, setting the transition matrix C = XWB' and performing singular value decomposition on C'C and CC' so that C'C = U_1ΣU_1' and CC' = U_2ΣU_2'; letting the rotation matrix R = U_1U_2, where B' is the transpose of the binary coding matrix B and C' is the transpose of the transition matrix C;
step 323, iterating the rotation matrix R in step 322 into step 321, iterating N times, where N is greater than 1.
6. The infrared target detection method according to claim 1, wherein the step 41 specifically comprises:
and calculating the Hamming distance between each candidate window and the target template according to the binary coding matrix B, and screening 10 candidate windows closest to each target template to serve as first round candidate windows.
7. The infrared target detection method of claim 6, wherein the step 42 comprises:
step 421, mapping the first round of candidate windows to the original infrared image to obtain uncompressed candidate windows serving as second round of candidate windows;
and 422, executing the steps 1 to 3 to the second round candidate window to obtain a new binary coding matrix D.
8. The infrared target detection method according to claim 7, wherein the step 43 specifically is:
calculating the Hamming distance between the second round candidate window and the target template according to the new binary coding matrix D;
and outputting the candidate window with the Hamming distance smaller than the threshold value T.
CN201810906312.7A 2018-08-10 2018-08-10 Infrared target detection method Active CN110826554B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810906312.7A CN110826554B (en) 2018-08-10 2018-08-10 Infrared target detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810906312.7A CN110826554B (en) 2018-08-10 2018-08-10 Infrared target detection method

Publications (2)

Publication Number Publication Date
CN110826554A (en) 2020-02-21
CN110826554B (en) 2022-09-09

Family

ID=69541016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810906312.7A Active CN110826554B (en) 2018-08-10 2018-08-10 Infrared target detection method

Country Status (1)

Country Link
CN (1) CN110826554B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860637B (en) * 2020-07-17 2023-11-21 河南科技大学 Single-shot multi-frame infrared target detection method
CN112399182B (en) * 2020-10-13 2021-08-31 中南大学 Single-frame infrared image hybrid compression method and system
CN115333735B (en) * 2022-10-11 2023-03-14 浙江御安信息技术有限公司 Safe data transmission method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150003573A (en) * 2013-07-01 2015-01-09 한국전자통신연구원 Method and apparatus for extracting pattern of image
CN104317902B (en) * 2014-10-24 2017-07-28 西安电子科技大学 Image search method based on local holding iterative quantization Hash
CN105160295B (en) * 2015-07-14 2019-05-17 东北大学 A kind of rapidly and efficiently face retrieval method towards extensive face database
CN107784659A (en) * 2017-10-16 2018-03-09 华南理工大学 A kind of method for searching for the similar visible images of electrical equipment infrared image

Also Published As

Publication number Publication date
CN110826554A (en) 2020-02-21

Similar Documents

Publication Publication Date Title
CN113012212B (en) Depth information fusion-based indoor scene three-dimensional point cloud reconstruction method and system
CN107239793B (en) Multi-quantization depth binary feature learning method and device
CN111079683A (en) Remote sensing image cloud and snow detection method based on convolutional neural network
CN110807473B (en) Target detection method, device and computer storage medium
CN108038435B (en) Feature extraction and target tracking method based on convolutional neural network
US8428397B1 (en) Systems and methods for large scale, high-dimensional searches
CN110826554B (en) Infrared target detection method
CN106845341B (en) Unlicensed vehicle identification method based on virtual number plate
CN108022254B (en) Feature point assistance-based space-time context target tracking method
CN108509925B (en) Pedestrian re-identification method based on visual bag-of-words model
CN107145841B (en) Low-rank sparse face recognition method and system based on matrix
CN105550641B (en) Age estimation method and system based on multi-scale linear differential texture features
CN110046660B (en) Product quantization method based on semi-supervised learning
CN110942057A (en) Container number identification method and device and computer equipment
Etezadifar et al. A new sample consensus based on sparse coding for improved matching of SIFT features on remote sensing images
CN110889865A (en) Video target tracking method based on local weighted sparse feature selection
CN110689049A (en) Visual classification method based on Riemann kernel dictionary learning algorithm
CN108694411B (en) Method for identifying similar images
KR101717377B1 (en) Device and method for head pose estimation
CN108549915B (en) Image hash code training model algorithm based on binary weight and classification learning method
CN113762278A (en) Asphalt pavement damage identification method based on target detection
CN107291813B (en) Example searching method based on semantic segmentation scene
CN108763265B (en) Image identification method based on block retrieval
CN111192302A (en) Feature matching method based on motion smoothness and RANSAC algorithm
CN115578778A (en) Human face image feature extraction method based on trace transformation and LBP (local binary pattern)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant