CN108052976B - Multiband image fusion identification method - Google Patents


Info

Publication number
CN108052976B
CN108052976B (application CN201711332225.7A)
Authority
CN
China
Prior art keywords
probability distribution
distribution function
wave infrared
infrared image
visible light
Prior art date
Legal status
Active
Application number
CN201711332225.7A
Other languages
Chinese (zh)
Other versions
CN108052976A (en)
Inventor
李妍妍
田瑞娟
王长城
隋旭阳
杨亮
李亚南
Current Assignee
China South Industries Group Automation Research Institute
Original Assignee
China South Industries Group Automation Research Institute
Priority date
Filing date
Publication date
Application filed by China South Industries Group Automation Research Institute
Priority to CN201711332225.7A
Publication of CN108052976A
Application granted
Publication of CN108052976B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/25: Fusion techniques
    • G06F18/251: Fusion techniques of input or preprocessed data
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10048: Infrared image
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multiband image fusion identification method, in which a medium wave infrared image and a long wave infrared image are fused to supplement the training sample, and data processing is performed on the supplemented sample. Classification can therefore exploit both the abundant detail, color information and higher resolution of the visible light image and the respective advantages of the medium wave and long wave infrared images, fusing the data in which each band is strongest and thereby minimizing information loss. The invention first constructs 3 classifiers and uses the fused medium wave/long wave infrared image as a supplementary training sample; because the fused image is produced by a feature-level early-fusion technique, the originally single training sample acquires the salient characteristics of both infrared bands, increasing the probability that the advantage information of each band is retained.

Description

Multiband image fusion identification method
Technical Field
The invention belongs to the technical field of computer image processing, and particularly relates to a multiband image fusion identification method intended mainly for use in multiband imaging detection systems.
Background
Most current so-called multi-band image fusion recognition methods are in fact based on dual-band images. Common combinations include fusion of visible light and millimeter wave images, fusion of visible light and infrared images, and fusion of different infrared bands. Such methods cannot be called true multi-band image fusion recognition, and directly applying a two-band fusion recognition method to images from three or more bands causes numerous problems.
Disclosure of Invention
The technical problem the invention aims to solve is to provide a target attribute fusion identification method for a multiband imaging system. In the multiband image fusion identification method provided by the invention, image data processing is robust to interference, and the imaging effect is better.
The invention is realized by the following technical scheme:
a multi-band image fusion identification method comprises the following steps:
a1: respectively carrying out sample training on training samples of the medium-wave infrared image, the long-wave infrared image and the visible light image, and respectively generating a medium-wave infrared image classifier, a long-wave infrared image classifier and a visible light image classifier in a one-to-one correspondence manner;
a2: performing feature fusion on the medium-wave infrared image and the long-wave infrared image, reconstructing the images to obtain a fusion image, and combining the fusion image and the training sample to form a fusion training sample;
a3: classifying the fusion training sample by adopting a medium wave infrared image classifier, a long wave infrared image classifier and a visible light image classifier, and obtaining the correct discrimination rate, the wrong discrimination rate and the rejection discrimination rate of each classifier to each class;
a4: obtaining a medium wave infrared image of a target video frame, a long wave infrared image of the target video frame and a visible light image of the target video frame in a medium wave infrared shooting mode, a long wave infrared shooting mode and a visible light shooting mode, respectively and correspondingly sending the medium wave infrared image, the long wave infrared image and the visible light image into a medium wave infrared image classifier, a long wave infrared image classifier and a visible light image classifier to classify and extract target characteristic information, obtaining classification results, and analyzing the corresponding classification results and the correct discrimination rate of the corresponding classifier to obtain a probability distribution function of each classifier;
a5: selecting the probability distribution functions of any 2 classifiers, performing conflict judgment on them, and then obtaining an intermediate probability distribution function by synthesis or selection;
a6: performing conflict judgment on the intermediate probability distribution function and the probability distribution function of the remaining classifier, and then obtaining a final probability distribution function by synthesis or selection;
a7: and outputting the identification result when the final probability distribution function is larger than the threshold value.
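The decision flow of steps A4 to A7 can be sketched as follows. The conflict test (disagreement of the top class), the normalized-product combination rule, the averaging of correct discrimination rates, and the threshold value are illustrative assumptions; the patent leaves the concrete conflict measure and synthesis rule open.

```python
# Illustrative sketch of the decision-level fusion in A5-A7.
# All rules below (conflict test, combination, threshold) are assumptions.

def combine(p, q):
    """Normalized product of two class-probability dicts (synthesis step)."""
    fused = {c: p[c] * q[c] for c in p}
    total = sum(fused.values())
    return {c: v / total for c, v in fused.items()}

def in_conflict(p, q):
    """Treat two verdicts as conflicting when they favor different classes."""
    return max(p, key=p.get) != max(q, key=q.get)

def fuse(p_mw, p_lw, p_vis, acc_mw, acc_lw, acc_vis, threshold=0.5):
    """Fuse medium wave, long wave and visible-light verdicts (A5-A7)."""
    # A5: conflict judgment on the two infrared verdicts, then synthesize/select
    if in_conflict(p_mw, p_lw):
        mid, acc_mid = (p_mw, acc_mw) if acc_mw >= acc_lw else (p_lw, acc_lw)
    else:
        mid, acc_mid = combine(p_mw, p_lw), (acc_mw + acc_lw) / 2  # A54 average
    # A6: conflict judgment against the visible-light verdict
    if in_conflict(mid, p_vis):
        final = mid if acc_mid >= acc_vis else p_vis
    else:
        final = combine(mid, p_vis)
    # A7: output an identification result only above the threshold
    best = max(final, key=final.get)
    return best if final[best] > threshold else None
```

For example, with `p_mw = {"car": 0.7, "person": 0.3}`, `p_lw = {"car": 0.6, "person": 0.4}` and `p_vis = {"car": 0.8, "person": 0.2}`, the non-conflicting verdicts are synthesized and the method reports "car".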
The specific process of A1 is as follows:
a11: respectively adopting a medium wave infrared camera, a long wave infrared camera and a visible light camera to correspondingly obtain a medium wave infrared image, a long wave infrared image and a visible light image,
a12: respectively preprocessing the medium wave infrared image, the long wave infrared image and the visible light image, extracting target characteristic information, detecting and tracking a target according to target characteristics, acquiring a target area to form a training sample, and generating a medium wave infrared image classifier, a long wave infrared image classifier and a visible light image classifier in a one-to-one correspondence mode after the training sample is subjected to sample training.
The specific process of A2 is as follows:
a21: after image registration and filtering/denoising preprocessing are carried out on the medium wave infrared image and the long wave infrared image, target characteristic information is extracted and feature fusion is performed, and a fusion image is obtained after image reconstruction;
a22: and combining the fused image and the training sample to form a fused training sample.
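A21 leaves the concrete fusion rule open. As a minimal illustrative stand-in, the sketch below blends two registered, denoised grayscale images by a weighted average (pixel-level rather than the patent's feature-level fusion); the weight parameter is an assumption.

```python
# Naive pixel-level stand-in for the feature fusion in A21: weighted average
# of two registered grayscale images of equal size (lists of pixel rows).
# The averaging rule is an illustrative assumption, not the patent's method.

def fuse_images(mw, lw, w=0.5):
    """Blend medium wave image `mw` with long wave image `lw`, weight `w` on mw."""
    return [[round(w * a + (1 - w) * b) for a, b in zip(r1, r2)]
            for r1, r2 in zip(mw, lw)]
```

With equal weights, `fuse_images([[100, 200]], [[50, 100]])` yields `[[75, 150]]`.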
The specific process of A3 is as follows: loading a medium wave infrared image classifier, a long wave infrared image classifier and a visible light image classifier, respectively sending the fusion training sample into the medium wave infrared image classifier, the long wave infrared image classifier and the visible light image classifier, obtaining a medium wave classification result, a long wave classification result and a visible light classification result of the fusion training sample, and obtaining a correct discrimination rate, an incorrect discrimination rate and a rejection discrimination rate of each class of the medium wave infrared image classifier, the long wave infrared image classifier and the visible light image classifier, respectively, wherein the correct discrimination rate of each class is regarded as the incorrect discrimination rate of other classes.
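The per-class rate estimation in A3 can be sketched as follows; the representation (ground-truth labels paired with classifier outputs, with `None` marking a rejected sample) is an illustrative assumption.

```python
# Estimating the correct, wrong, and rejection discrimination rates of A3
# from a labeled fusion training sample. `None` marks a rejected sample.
from collections import defaultdict

def discrimination_rates(labels, predictions):
    """Return per-class correct/wrong/reject rates from paired labels/outputs."""
    counts = defaultdict(lambda: {"correct": 0, "wrong": 0, "reject": 0, "n": 0})
    for truth, pred in zip(labels, predictions):
        c = counts[truth]
        c["n"] += 1
        if pred is None:
            c["reject"] += 1
        elif pred == truth:
            c["correct"] += 1
        else:
            c["wrong"] += 1
    return {cls: {k: c[k] / c["n"] for k in ("correct", "wrong", "reject")}
            for cls, c in counts.items()}
```

These rates are exactly what A4 later consults to turn a raw classification result into a probability distribution function.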
The specific process of A4 is as follows:
a41: respectively adopting a medium wave infrared camera, a long wave infrared camera and a visible light camera to correspondingly obtain a medium wave infrared image of a target video frame, a long wave infrared image of the target video frame and a visible light image of the target video frame, and sending the medium wave infrared image of the target video frame, the long wave infrared image of the target video frame and the visible light image of the target video frame into a medium wave infrared image classifier, a long wave infrared image classifier and a visible light image classifier in a one-to-one correspondence manner to classify and extract target characteristic information to obtain a medium wave classification result of the target video frame, a long wave classification result of the target video frame and a visible light classification result of the target video frame;
a42: finding out a corresponding medium wave probability distribution function according to the medium wave classification result of the target video frame and the correct discrimination rate and the wrong discrimination rate of the medium wave infrared image classifier for each type; and finding out a corresponding long-wave probability distribution function according to the long-wave classification result of the target video frame and the correct judgment rate and the wrong judgment rate of the long-wave infrared image classifier for each type, and finding out a corresponding visible light probability distribution function according to the visible light classification result of the target video frame and the correct judgment rate and the wrong judgment rate of the visible light image classifier for each type.
The specific process of A5 is as follows:
a51: performing conflict analysis and judgment on the medium wave probability distribution function and the long wave probability distribution function; if they conflict, switching to A53, and if they do not conflict, switching to A52;
a52: synthesizing the medium wave probability distribution function and the long wave probability distribution function to form an intermediate probability distribution function, and then switching to A54;
a53: comparing the correct discrimination rates of the medium wave infrared image classifier and the long wave infrared image classifier; if that of the medium wave infrared image classifier is higher, selecting the medium wave probability distribution function as the intermediate probability distribution function, and if that of the long wave infrared image classifier is higher, selecting the long wave probability distribution function as the intermediate probability distribution function; then switching to A54;
a54: computing a weighted average of the correct discrimination rates of the medium wave infrared image classifier and the long wave infrared image classifier to obtain an intermediate correct discrimination rate, and switching to A6;
the specific process of A6 is as follows:
a61: performing conflict analysis and judgment on the intermediate probability distribution function and the visible light probability distribution function; if they conflict, switching to A63, and if they do not conflict, switching to A62;
a62: synthesizing the intermediate probability distribution function and the visible light probability distribution function to form a final probability distribution function, and then switching to A7;
a63: comparing the correct discrimination rate of the visible light image classifier with the intermediate correct discrimination rate; if the intermediate correct discrimination rate is higher, selecting the intermediate probability distribution function as the final probability distribution function, and if the correct discrimination rate of the visible light image classifier is higher, selecting the visible light probability distribution function as the final probability distribution function; then switching to A7.
The specific process of A7 is as follows: when the final probability distribution function is larger than the threshold value, the target characteristic information is overlaid on the medium wave infrared image, the long wave infrared image and the visible light image of the target video frame to obtain the final images; the final images and the target attribute classification result are transmitted to a display output module, which displays them and outputs the identification result. The target characteristic information comprises: target size, outline, texture and circumscribed rectangle information.
The specific process of synthesizing the medium wave probability distribution function and the long wave probability distribution function to form the intermediate probability distribution function is as follows: the medium wave probability distribution function and the long wave probability distribution function are obtained, and the intermediate probability distribution function is obtained according to a probability distribution function synthesis rule based on modified D-S evidence theory, Bayes inference and fuzzy inference.
The specific process of synthesizing the intermediate probability distribution function and the visible light probability distribution function to form the final probability distribution function is as follows: the intermediate probability distribution function and the visible light probability distribution function are obtained, and the final probability distribution function is obtained according to a probability distribution function synthesis rule based on modified D-S evidence theory, Bayes inference and fuzzy inference.
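The "modified D-S" rule invoked here is not spelled out in the patent. As an illustrative baseline only, classical Dempster combination over singleton hypotheses is sketched below, with the conflict coefficient K normalized out.

```python
# Classical Dempster combination for two basic probability assignments over
# singleton hypotheses. The patent's actual modification (and its Bayes and
# fuzzy-inference variants) is unspecified; this is an illustrative baseline.

def ds_combine(m1, m2):
    """Combine two mass-function dicts sharing the same class keys."""
    classes = m1.keys()
    # Conflict coefficient K: mass assigned to incompatible hypothesis pairs.
    k = sum(m1[a] * m2[b] for a in classes for b in classes if a != b)
    if k >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {a: m1[a] * m2[a] / (1.0 - k) for a in classes}
```

When K approaches 1 the evidence is in total conflict, which is exactly the case the method routes to the selection branch (A53/A63) instead of synthesis.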
The design principle of the invention is as follows: sensors in different wave bands have different imaging characteristics, and the imaging spectrum covers visible light, millimeter wave, infrared and other bands. The visible light image has rich detail and color information and higher resolution, but is easily affected by weather and time of day. Images in the infrared band have lower resolution and less detail, but can work in all weather and have strong anti-jamming capability. From the target's perspective, the contour features of the target are clear in the long wave infrared image, while in the medium wave infrared image the contours are less clear but the high temperature regions of the target show strong gradation. In this system, the long wave infrared, medium wave infrared and visible light sensors are used together for fusion recognition; the complementary information among the images can be fully utilized to construct a model that describes the target object more comprehensively, thereby enhancing image comprehension and information reliability and improving the target detection probability and the accuracy of target attribute identification.
How to construct a reasonable fusion recognition model based on long wave infrared, medium wave infrared and visible light sensors is the critical core problem. The research of the invention found that fusing the medium wave infrared image and the long wave infrared image to supplement the training sample, and performing data processing on the supplemented sample, allows classification to exploit both the abundant detail, color information and high resolution of the visible light image and the respective advantages of the two infrared bands, fusing the data in which each band is strongest and thereby minimizing information loss. The method constructs 3 classifiers and takes the fused medium wave/long wave infrared image as a supplementary training sample; because the fused image is produced by a feature-level early-fusion technique, the originally single training sample acquires the salient characteristics of both infrared bands, increasing the probability that the advantage information of each band is retained. By synthesis, or by selection of the probability function with the clearer advantage, a final probability function is formed, so that all 3 probability functions participate in the information operation at decision time, which is equivalent to deliberately emphasizing the advantages of infrared imaging on top of a high resolution image.
Compared with the prior art, the invention has the following advantages and beneficial effects: the multi-band image fusion recognition method makes full use of the image information of each band and of the information provided by the preliminary classification conclusion of each band classifier, converting them into basic probability assignments for the various conclusions. This reduces information loss, makes the probability assignments of the fusion system more reasonable, solves the synthesis problem under evidence conflict, improves target recognition accuracy, and meets the target recognition accuracy requirement of a multi-band detection system.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
FIG. 1 is a flow chart of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following examples, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not to be construed as limiting the present invention.
Example one
As shown in FIG. 1,
a multi-band image fusion identification method comprises the following steps:
a1: respectively carrying out sample training on training samples of the medium-wave infrared image, the long-wave infrared image and the visible light image, and respectively generating a medium-wave infrared image classifier, a long-wave infrared image classifier and a visible light image classifier in a one-to-one correspondence manner;
a2: performing feature fusion on the medium-wave infrared image and the long-wave infrared image, reconstructing the images to obtain a fusion image, and combining the fusion image and the training sample to form a fusion training sample;
a3: classifying the fusion training sample by adopting a medium wave infrared image classifier, a long wave infrared image classifier and a visible light image classifier, and obtaining the correct discrimination rate, the wrong discrimination rate and the rejection discrimination rate of each classifier to each class;
a4: obtaining a medium wave infrared image of a target video frame, a long wave infrared image of the target video frame and a visible light image of the target video frame in a medium wave infrared shooting mode, a long wave infrared shooting mode and a visible light shooting mode, respectively and correspondingly sending the medium wave infrared image, the long wave infrared image and the visible light image into a medium wave infrared image classifier, a long wave infrared image classifier and a visible light image classifier to classify and extract target characteristic information, obtaining classification results, and analyzing the corresponding classification results and the correct discrimination rate of the corresponding classifier to obtain a probability distribution function of each classifier;
a5: selecting the probability distribution functions of any 2 classifiers, performing conflict judgment on them, and then obtaining an intermediate probability distribution function by synthesis or selection;
a6: performing conflict judgment on the intermediate probability distribution function and the probability distribution function of the remaining classifier, and then obtaining a final probability distribution function by synthesis or selection;
a7: and outputting the identification result when the final probability distribution function is larger than the threshold value.
The specific process of A1 is as follows:
a11: respectively adopting a medium wave infrared camera, a long wave infrared camera and a visible light camera to correspondingly obtain a medium wave infrared image, a long wave infrared image and a visible light image,
a12: respectively preprocessing the medium wave infrared image, the long wave infrared image and the visible light image, extracting target characteristic information, detecting and tracking a target according to target characteristics, acquiring a target area to form a training sample, and generating a medium wave infrared image classifier, a long wave infrared image classifier and a visible light image classifier in a one-to-one correspondence mode after the training sample is subjected to sample training.
The specific process of A2 is as follows:
a21: after image registration and filtering/denoising preprocessing are carried out on the medium wave infrared image and the long wave infrared image, target characteristic information is extracted and feature fusion is performed, and a fusion image is obtained after image reconstruction;
a22: and combining the fused image and the training sample to form a fused training sample.
The specific process of A3 is as follows: loading a medium wave infrared image classifier, a long wave infrared image classifier and a visible light image classifier, respectively sending the fusion training sample into the medium wave infrared image classifier, the long wave infrared image classifier and the visible light image classifier, obtaining a medium wave classification result, a long wave classification result and a visible light classification result of the fusion training sample, and obtaining a correct discrimination rate, an incorrect discrimination rate and a rejection discrimination rate of each class of the medium wave infrared image classifier, the long wave infrared image classifier and the visible light image classifier, respectively, wherein the correct discrimination rate of each class is regarded as the incorrect discrimination rate of other classes.
The specific process of A4 is as follows:
a41: respectively adopting a medium wave infrared camera, a long wave infrared camera and a visible light camera to correspondingly obtain a medium wave infrared image of a target video frame, a long wave infrared image of the target video frame and a visible light image of the target video frame, and sending the medium wave infrared image of the target video frame, the long wave infrared image of the target video frame and the visible light image of the target video frame into a medium wave infrared image classifier, a long wave infrared image classifier and a visible light image classifier in a one-to-one correspondence manner to classify and extract target characteristic information to obtain a medium wave classification result of the target video frame, a long wave classification result of the target video frame and a visible light classification result of the target video frame;
a42: finding out a corresponding medium wave probability distribution function according to the medium wave classification result of the target video frame and the correct discrimination rate and the wrong discrimination rate of the medium wave infrared image classifier for each type; and finding out a corresponding long-wave probability distribution function according to the long-wave classification result of the target video frame and the correct judgment rate and the wrong judgment rate of the long-wave infrared image classifier for each type, and finding out a corresponding visible light probability distribution function according to the visible light classification result of the target video frame and the correct judgment rate and the wrong judgment rate of the visible light image classifier for each type.
The specific process of A5 is as follows:
a51: performing conflict analysis and judgment on the medium wave probability distribution function and the long wave probability distribution function; if they conflict, switching to A53, and if they do not conflict, switching to A52;
a52: synthesizing the medium wave probability distribution function and the long wave probability distribution function to form an intermediate probability distribution function, and then switching to A54;
a53: comparing the correct discrimination rates of the medium wave infrared image classifier and the long wave infrared image classifier; if that of the medium wave infrared image classifier is higher, selecting the medium wave probability distribution function as the intermediate probability distribution function, and if that of the long wave infrared image classifier is higher, selecting the long wave probability distribution function as the intermediate probability distribution function; then switching to A54;
a54: computing a weighted average of the correct discrimination rates of the medium wave infrared image classifier and the long wave infrared image classifier to obtain an intermediate correct discrimination rate, and switching to A6;
the specific process of A6 is as follows:
a61: performing conflict analysis and judgment on the intermediate probability distribution function and the visible light probability distribution function; if they conflict, switching to A63, and if they do not conflict, switching to A62;
a62: synthesizing the intermediate probability distribution function and the visible light probability distribution function to form a final probability distribution function, and then switching to A7;
a63: comparing the correct discrimination rate of the visible light image classifier with the intermediate correct discrimination rate; if the intermediate correct discrimination rate is higher, selecting the intermediate probability distribution function as the final probability distribution function, and if the correct discrimination rate of the visible light image classifier is higher, selecting the visible light probability distribution function as the final probability distribution function; then switching to A7.
The specific process of A7 is as follows: when the final probability distribution function is larger than the threshold value, the target characteristic information is overlaid on the medium wave infrared image, the long wave infrared image and the visible light image of the target video frame to obtain the final images; the final images and the target attribute classification result are transmitted to a display output module, which displays them and outputs the identification result. The target characteristic information comprises: target size, outline, texture and circumscribed rectangle information.
The specific process of synthesizing the medium wave probability distribution function and the long wave probability distribution function to form the intermediate probability distribution function is as follows: the medium wave probability distribution function and the long wave probability distribution function are obtained, and the intermediate probability distribution function is obtained according to a probability distribution function synthesis rule based on modified D-S evidence theory, Bayesian inference and fuzzy inference.
The specific process of synthesizing the intermediate probability distribution function and the visible light probability distribution function to form the final probability distribution function is as follows: the intermediate probability distribution function and the visible light probability distribution function are obtained, and the final probability distribution function is obtained according to a probability distribution function synthesis rule based on modified D-S evidence theory, Bayesian inference and fuzzy inference.
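As a concrete illustration of such a synthesis step, the classical Dempster-Shafer combination rule over singleton hypotheses can be sketched as below. The patent's actual rule is a modified D-S/Bayes/fuzzy-inference synthesis whose exact form is not given, so this classical form and the names used are assumptions for illustration:

```python
def ds_combine(m1, m2):
    """Combine two basic probability assignments (dicts class -> mass)
    with the classical Dempster rule restricted to singleton hypotheses:
    masses on the same class multiply, conflicting mass K is discarded,
    and the result is renormalized by 1 - K."""
    classes = set(m1) | set(m2)
    k = sum(m1.get(a, 0.0) * m2.get(b, 0.0)
            for a in classes for b in classes if a != b)
    if k >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    norm = 1.0 - k
    return {c: m1.get(c, 0.0) * m2.get(c, 0.0) / norm for c in classes}

# Two bands that both favour "vehicle" reinforce each other:
m_mw = {"vehicle": 0.7, "person": 0.3}   # medium-wave evidence
m_lw = {"vehicle": 0.8, "person": 0.2}   # long-wave evidence
m_mid = ds_combine(m_mw, m_lw)           # intermediate distribution
```

Here K = 0.7·0.2 + 0.3·0.8 = 0.38, so the combined belief in "vehicle" rises to 0.56/0.62 ≈ 0.90, higher than either band alone — the intended effect of fusing consistent evidence.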
As shown in Fig. 1 of the drawings, reasonable feature information is needed when extracting image features, so this embodiment also gives in Fig. 1 the process of extracting feature information from an image, as follows:
S1: initializing, and reading the image to be recognized;
S2: extracting the target features in the image;
S3: selecting the respective correlation metric benchmark of each feature;
S4: generating the respective image recognition evidence sets;
S5: calculating the entropy of the image target region;
S6: performing target-region entropy detection; when the entropy is smaller than a threshold value, returning to step S2 to extract more target characteristic information; otherwise, proceeding to step S7;
S7: calculating the gray-level consistency of the image target region and performing gray-level consistency detection on the target region; when the local gray-level consistency is smaller than a threshold value, returning to step S2 to extract more target characteristic information.
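The S5-S7 quality checks can be sketched as follows. The patent does not give formulas, so the Shannon-entropy histogram measure and the inverse-variance consistency measure below are plausible illustrative readings, not the patent's specified computations:

```python
import math

def region_entropy(pixels):
    """S5: Shannon entropy (bits) of the gray-level histogram of a
    target region; low entropy suggests too little detail captured."""
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist.values())

def gray_consistency(pixels):
    """S7 (assumed definition): 1 / (1 + variance), so a perfectly
    uniform region scores 1.0 and noisy regions score lower."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return 1.0 / (1.0 + var)

# S6-style gate: a flat region has zero entropy, so the loop would
# return to S2 to extract more target characteristic information.
flat = [128] * 16
needs_more = region_entropy(flat) < 1.0
```

A two-valued region with equal counts has exactly 1 bit of entropy, which is a convenient sanity check for the histogram code.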
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (9)

1. A multi-band image fusion recognition method is characterized by comprising the following steps:
A1: respectively carrying out sample training on training samples of the medium-wave infrared image, the long-wave infrared image and the visible light image, and respectively generating a medium-wave infrared image classifier, a long-wave infrared image classifier and a visible light image classifier in a one-to-one correspondence manner;
A2: performing feature fusion on the medium-wave infrared image and the long-wave infrared image, reconstructing the images to obtain a fusion image, and combining the fusion image and the training sample to form a fusion training sample;
A3: classifying the fusion training sample by adopting the medium-wave infrared image classifier, the long-wave infrared image classifier and the visible light image classifier, and obtaining the correct discrimination rate, the wrong discrimination rate and the rejection discrimination rate of each classifier for each class;
A4: obtaining a medium-wave infrared image of a target video frame, a long-wave infrared image of the target video frame and a visible light image of the target video frame in a medium-wave infrared shooting mode, a long-wave infrared shooting mode and a visible light shooting mode, respectively and correspondingly sending the medium-wave infrared image, the long-wave infrared image and the visible light image into the medium-wave infrared image classifier, the long-wave infrared image classifier and the visible light image classifier to classify and extract target characteristic information, obtaining classification results, and analyzing the corresponding classification results together with the correct discrimination rate of the corresponding classifier to obtain a probability distribution function of each classifier;
A5: selecting the probability distribution functions of any 2 classifiers to perform conflict judgment, and then obtaining an intermediate probability distribution function in a synthesis or selection mode;
A6: after conflict judgment is carried out on the intermediate probability distribution function and the probability distribution function of the remaining classifier, obtaining a final probability distribution function in a synthesis or selection mode;
A7: outputting the identification result when the final probability distribution function is larger than the threshold value.
2. The multiband image fusion recognition method according to claim 1, wherein:
the specific process of A1 is as follows:
A11: respectively adopting a medium wave infrared camera, a long wave infrared camera and a visible light camera to correspondingly obtain a medium wave infrared image, a long wave infrared image and a visible light image;
A12: respectively preprocessing the medium wave infrared image, the long wave infrared image and the visible light image, extracting target characteristic information, detecting and tracking a target according to the target characteristics, acquiring a target area to form training samples, and generating a medium wave infrared image classifier, a long wave infrared image classifier and a visible light image classifier in one-to-one correspondence after the training samples are subjected to sample training.
3. The multiband image fusion recognition method according to claim 1, wherein:
the specific process of A2 is as follows:
A21: after image registration and filtering-denoising preprocessing are performed on the medium wave infrared image and the long wave infrared image, extracting target characteristic information, performing feature fusion, and obtaining a fusion image after image reconstruction;
A22: combining the fusion image and the training sample to form a fusion training sample.
4. The multiband image fusion recognition method according to claim 1, wherein:
the specific process of A3 is as follows: loading a medium wave infrared image classifier, a long wave infrared image classifier and a visible light image classifier, respectively sending the fusion training sample into the medium wave infrared image classifier, the long wave infrared image classifier and the visible light image classifier, obtaining a medium wave classification result, a long wave classification result and a visible light classification result of the fusion training sample, and obtaining a correct discrimination rate, an incorrect discrimination rate and a rejection discrimination rate of each class of the medium wave infrared image classifier, the long wave infrared image classifier and the visible light image classifier, respectively, wherein the correct discrimination rate of each class is regarded as the incorrect discrimination rate of other classes.
5. The multiband image fusion recognition method according to any one of claims 1 to 4, wherein:
the specific process of A4 is as follows:
A41: respectively adopting a medium wave infrared camera, a long wave infrared camera and a visible light camera to correspondingly obtain a medium wave infrared image of a target video frame, a long wave infrared image of the target video frame and a visible light image of the target video frame, and sending them into the medium wave infrared image classifier, the long wave infrared image classifier and the visible light image classifier in one-to-one correspondence to classify and extract target characteristic information, so as to obtain a medium wave classification result, a long wave classification result and a visible light classification result of the target video frame;
A42: finding out the corresponding medium wave probability distribution function according to the medium wave classification result of the target video frame and the correct and wrong discrimination rates of the medium wave infrared image classifier for each class; finding out the corresponding long wave probability distribution function according to the long wave classification result of the target video frame and the correct and wrong discrimination rates of the long wave infrared image classifier for each class; and finding out the corresponding visible light probability distribution function according to the visible light classification result of the target video frame and the correct and wrong discrimination rates of the visible light image classifier for each class.
6. The multiband image fusion recognition method according to claim 5, wherein:
the specific process of A5 is as follows:
A51: performing conflict analysis and judgment on the medium wave probability distribution function and the long wave probability distribution function; if the two conflict, switching to A53; if they do not conflict, switching to A52;
A52: synthesizing the medium wave probability distribution function and the long wave probability distribution function to form an intermediate probability distribution function, and then switching to A54 and A6;
A53: comparing the correct discrimination rates of the medium wave infrared image classifier and the long wave infrared image classifier; if the correct discrimination rate of the medium wave infrared image classifier is higher, selecting the medium wave probability distribution function as the intermediate probability distribution function; if the correct discrimination rate of the long wave infrared image classifier is higher, selecting the long wave probability distribution function as the intermediate probability distribution function; and then switching to A54 and A6;
A54: computing a weighted average of the correct discrimination rates of the medium wave infrared image classifier and the long wave infrared image classifier to obtain an intermediate correct discrimination rate, and switching to A6;
the specific process of A6 is as follows:
A61: performing conflict analysis and judgment on the intermediate probability distribution function and the visible light probability distribution function; if the two conflict, switching to A63; if they do not conflict, switching to A62;
A62: synthesizing the intermediate probability distribution function and the visible light probability distribution function to form a final probability distribution function, and then switching to A7;
A63: comparing the correct discrimination rate of the visible light image classifier with the intermediate correct discrimination rate; if the intermediate correct discrimination rate is higher, selecting the intermediate probability distribution function as the final probability distribution function; if the correct discrimination rate of the visible light image classifier is higher, selecting the probability distribution function corresponding to the visible light image classifier as the final probability distribution function; and switching to A7.
7. The multiband image fusion recognition method according to any one of claims 1 to 4, wherein:
the specific process of A7 is as follows: when the final probability distribution function is larger than the threshold value, target characteristic information is superimposed on the medium wave infrared image, the long wave infrared image and the visible light image of the target video frame to obtain final images; the final images and the target attribute classification result are transmitted to a display output module for display and for output of the identification result; the target characteristic information includes target size, contour, texture and circumscribed-rectangle information.
8. The multiband image fusion recognition method of claim 6, wherein: the specific process of synthesizing the medium wave probability distribution function and the long wave probability distribution function to form the intermediate probability distribution function is as follows: the medium wave probability distribution function and the long wave probability distribution function are obtained, and the intermediate probability distribution function is obtained according to a probability distribution function synthesis rule based on modified D-S evidence theory, Bayesian inference and fuzzy inference.
9. The multiband image fusion recognition method of claim 6, wherein: the specific process of synthesizing the intermediate probability distribution function and the visible light probability distribution function to form the final probability distribution function is as follows: the intermediate probability distribution function and the visible light probability distribution function are obtained, and the final probability distribution function is obtained according to a probability distribution function synthesis rule based on modified D-S evidence theory, Bayesian inference and fuzzy inference.
CN201711332225.7A 2017-12-13 2017-12-13 Multiband image fusion identification method Active CN108052976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711332225.7A CN108052976B (en) 2017-12-13 2017-12-13 Multiband image fusion identification method

Publications (2)

Publication Number Publication Date
CN108052976A CN108052976A (en) 2018-05-18
CN108052976B true CN108052976B (en) 2021-04-06

Family

ID=62132639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711332225.7A Active CN108052976B (en) 2017-12-13 2017-12-13 Multiband image fusion identification method

Country Status (1)

Country Link
CN (1) CN108052976B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921803B (en) * 2018-06-29 2020-09-08 华中科技大学 Defogging method based on millimeter wave and visible light image fusion
CN109492714B (en) * 2018-12-29 2023-09-15 同方威视技术股份有限公司 Image processing apparatus and method thereof
CN110987189B (en) * 2019-11-21 2021-11-02 北京都是科技有限公司 Method, system and device for detecting temperature of target object
CN111401321A (en) * 2020-04-17 2020-07-10 Oppo广东移动通信有限公司 Object recognition model training method and device, electronic equipment and readable storage medium
CN112070111B (en) * 2020-07-28 2023-11-28 浙江大学 Multi-target detection method and system adapting to multi-band image
CN113762277B (en) * 2021-09-09 2024-05-24 东北大学 Multiband infrared image fusion method based on Cascade-GAN
CN114359743B (en) * 2022-03-21 2022-06-21 华中科技大学 Low-slow small target identification method and system based on multiband

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592134A (en) * 2011-11-28 2012-07-18 北京航空航天大学 Multistage decision fusing and classifying method for hyperspectrum and infrared data
CN103984936A (en) * 2014-05-29 2014-08-13 中国航空无线电电子研究所 Multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hasan Demirel et al.; "Pose Invariant Face Recognition Using Probability Distribution Functions in Different Color Channels"; IEEE Signal Processing Letters; 15 July 2008; Vol. 15; pp. 537-540 *
Wang Fengchao et al.; "Multi-feature target fusion detection algorithm based on fuzzy evidence theory"; Acta Optica Sinica; 31 March 2010; Vol. 30, No. 3; pp. 713-719 *

Also Published As

Publication number Publication date
CN108052976A (en) 2018-05-18

Similar Documents

Publication Publication Date Title
CN108052976B (en) Multiband image fusion identification method
CN107665324B (en) Image identification method and terminal
CN104598883B (en) Target knows method for distinguishing again in a kind of multiple-camera monitoring network
CN110929593B (en) Real-time significance pedestrian detection method based on detail discrimination
CN109934224B (en) Small target detection method based on Markov random field and visual contrast mechanism
KR20160143494A (en) Saliency information acquisition apparatus and saliency information acquisition method
Jain et al. Pixel objectness
CN109725721B (en) Human eye positioning method and system for naked eye 3D display system
Huynh-The et al. NIC: A robust background extraction algorithm for foreground detection in dynamic scenes
CN109190456B (en) Multi-feature fusion overlook pedestrian detection method based on aggregated channel features and gray level co-occurrence matrix
Zhang et al. Multi-features integration based hyperspectral videos tracker
CN111401278A (en) Helmet identification method and device, electronic equipment and storage medium
Wang et al. Visual saliency detection based on region descriptors and prior knowledge
CN109886195A (en) Skin identification method based on depth camera near-infrared single color gradation figure
CN111160194B (en) Static gesture image recognition method based on multi-feature fusion
CN111028263B (en) Moving object segmentation method and system based on optical flow color clustering
Lian et al. Matching of tracked pedestrians across disjoint camera views using CI-DLBP
Paul et al. Rotation invariant multiview face detection using skin color regressive model and support vector regression
CN115620066A (en) Article detection method and device based on X-ray image and electronic equipment
CN105354547A (en) Pedestrian detection method in combination of texture and color features
Guo et al. Person re-identification by weighted integration of sparse and collaborative representation
CN114037671A (en) Microscopic hyperspectral leukocyte detection method based on improved fast RCNN
CN111881924B (en) Dark-light vehicle illumination identification method combining illumination invariance and short-exposure illumination enhancement
Lou et al. Hierarchical co-salient object detection via color names
Wang et al. Saliency detection using mutual consistency-guided spatial cues combination

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210618

Address after: 621000 building 31, No.7, Section 2, Xianren Road, Youxian District, Mianyang City, Sichuan Province

Patentee after: China Ordnance Equipment Group Automation Research Institute Co.,Ltd.

Address before: 621000 No. 7, Section 2, Xianren Road, Youxian District, Mianyang City, Sichuan Province

Patentee before: China Ordnance Equipment Group Automation Research Institute
