CN116403104A - Multispectral sensing and target recognition method and device based on multivariate information fusion - Google Patents

Multispectral sensing and target recognition method and device based on multivariate information fusion

Info

Publication number
CN116403104A
Authority
CN
China
Prior art keywords
target
information
spectrum
light
target point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310233861.3A
Other languages
Chinese (zh)
Inventor
李泓洋
樊奇林
王宜轩
李韶光
陈峰
袁江波
张雪
潘宇
王莉
李萌萌
许伟
赵良
谢放
余卓阳
崔颖函
梁伟栋
王琳
张晶莹
李金磊
李昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Near Space Vehicles System Engineering
Original Assignee
Beijing Institute of Near Space Vehicles System Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Near Space Vehicles System Engineering filed Critical Beijing Institute of Near Space Vehicles System Engineering
Priority to CN202310233861.3A
Publication of CN116403104A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25 Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/27 Colour; Spectral properties, using photo-electric detection; circuits for computing concentration
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V8/00 Prospecting or detecting by optical means
    • G01V8/10 Detecting, e.g. by using light barriers
    • G01V8/20 Detecting, e.g. by using light barriers using multiple transmitters or receivers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N2021/1765 Method using an image detector and processing of image signal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection


Abstract

The method realizes multispectral measurement of a target and fuses the information of the different spectral bands through multivariate information fusion: it acquires the target's multispectral information, assigns a confidence to each band, and performs the fusion on that basis. This improves the accuracy and robustness of target recognition and addresses the lack of information fusion in existing approaches.

Description

Multispectral sensing and target recognition method and device based on multivariate information fusion
Technical Field
The application relates to the technical field of optical detection and information fusion, and in particular to a multispectral sensing and target recognition method and device based on multivariate information fusion.
Background
Traditional intensity-detection technology measures only the radiation intensity of light, so it cannot meet practical detection requirements when the target resembles the background in color or the ambient light is weak. Compared with traditional intensity detection, multispectral polarization imaging detection can obtain polarization and spectral information of an object that traditional detection cannot, and because the polarization and spectral signatures of different objects differ markedly, it helps identify targets hidden in the background.
Disclosure of Invention
To address the problems of deep-space detection and the identification of weak, small targets, the application provides a multispectral sensing and target recognition method and device based on multivariate information fusion. It effectively improves deep-space detection capability and target feature recognition, lengthens the lead time with which an aircraft discovers intercepting targets, and greatly improves the aircraft's anti-interception capability.
The technical scheme adopted by the application is as follows:
A multispectral sensing and target recognition method based on multivariate information fusion comprises the following steps:
Step 1: acquire target spectral information in four spectral bands (ultraviolet, blue, red, and infrared) using detectors for the different bands;
Step 2: assign a confidence to each band's target spectral information according to its signal-to-noise ratio;
Step 3: perform pixel-level fusion of the per-band spectral information using the confidences of the images in the different bands, obtaining a comprehensive gray-level image;
Step 4: extract feature pixels of the comprehensive gray-level image by gradient analysis and identify the target point;
Step 5: resolve the pixel coordinates and geometric coordinates of the target point to obtain the target point information;
Step 6: restore the target point information into the spectral images and obtain the result comprehensive confidence from the confidence curve.
Further, in step 3, the pixel-level fusion of the per-band spectral information comprises:
selecting the four spectral bands (ultraviolet F1, blue F2, red F3, and infrared F4) for fuzzy-degree evidence fusion, computed as:
M(F) = [ Σ_{F1∩F2∩F3∩F4 = F} m1(F1)·m2(F2)·m3(F3)·m4(F4) ] / (1 - K), where K = Σ_{F1∩F2∩F3∩F4 = ∅} m1(F1)·m2(F2)·m3(F3)·m4(F4)
where F1 denotes the ultraviolet-band spectral information of a given target-point pixel and m1(F1) its ultraviolet-band confidence evaluation that the target point is a real target; F2 and m2(F2) are the corresponding blue-band quantities; F3 and m3(F3) the red-band quantities; and F4 and m4(F4) the infrared-band quantities. F is the multispectral synthesis of F1, F2, F3, and F4 and satisfies the set-theoretic intersection F = F1 ∩ F2 ∩ F3 ∩ F4, where ∩ denotes intersection; this yields the comprehensive confidence M(F) of the target pixel-level fusion.
Further, in step 6, obtaining the result comprehensive confidence from the confidence curve comprises:
after restoring the target point information into the spectral images, obtaining the confidence curve from the characteristics of the target point information and computing the result comprehensive confidence as:
M(R) = f(R; c1, c2, τ1, τ2, μ, σ) (confidence curve; the explicit expression is given only as an image in the original)
where R is the signal-to-noise ratio of the target point in the ultraviolet F1, blue F2, red F3, and infrared F4 bands, and c1, c2, τ1, τ2, μ, and σ are characteristic coefficients.
Further, in step 1, each item of spectral information corresponding to the target information is acquired using sensors for the different spectral bands.
Further, in step 2, the signal-to-noise ratio of the image of each item of spectral information is calculated; the confidences of the images in the different bands are determined from the signal-to-noise ratios at the same instant and assigned to each image.
The image signal-to-noise ratio SNR is calculated as:
SNR = (mean gray level of the point-target imaging region - mean gray level of the background imaging region) / standard deviation of the background-region gray levels;
the signal-to-noise ratio R of each band image at the same instant is computed at the target point in each of the ultraviolet, blue, red, and infrared bands, from which the confidences of the images in the different bands are determined.
Further, the different spectral bands comprise ultraviolet, blue, red, and infrared light, and the fusion selects these four bands.
Further, in step 4, the gradient values of the comprehensive gray-level image are calculated and pixels with larger values are taken as candidate targets; because the candidates contain only a small number of real targets, the real targets in the image are obtained by combining the motion characteristics of the images at consecutive instants;
a real target is a candidate that moves continuously in the image without abrupt jumps.
Further, in step 5, the pixel coordinates of the target point are obtained and mapped to spatial geometric coordinates through a mapping relation, yielding the target information.
Further, mapping the pixel coordinates to the spatial geometric coordinates through the mapping relation comprises:
when the pixel obtained in the image is [u v], the resulting spatial geometric coordinates [x y] satisfy:
x = a1·u + a2·v
y = a3·u + a4·v
wherein a1, a2, a3, a4 are camera parameters.
A multispectral sensing and target recognition device based on multivariate information fusion, the device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
Through the embodiments of the application, the following technical effects can be obtained:
(1) multispectral information fusion is carried out on a target, and target image features under different spectral conditions are obtained, realizing multispectral perception measurement;
(2) multispectral pixel-level fusion is carried out on the target information, a comprehensive gray-level image of the target is acquired, and the target's gradient features are confirmed, enabling recognition of information-poor targets;
(3) confidence analysis is carried out on the basis of the spectral information and a confidence evaluation of the target is produced, enabling real-time online assessment of result reliability as a basis for eliminating false targets.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below. The drawings described below show some embodiments of the application; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of the 3×3 region division of a target pixel.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions are described below completely with reference to the drawings in the embodiments. The described embodiments are some, not all, of the embodiments of the application; all other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the scope of the present application.
FIG. 1 is a schematic flow chart of the method of the present invention;
the method comprises the following steps:
Step 1: for the target information, acquire each item of spectral information corresponding to the target using sensors for the different spectral bands; the bands comprise ultraviolet, blue, red, and infrared light.
Step 2: calculate the image signal-to-noise ratio R of each item of spectral information, determine the confidences of the images in the different bands from the signal-to-noise ratios at the same instant, and assign the confidences to each image. The image signal-to-noise ratio R of each item of spectral information is calculated as:
R = (mean gray level of the point-target imaging region - mean gray level of the background imaging region) / standard deviation of the background-region gray levels.
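As an illustrative aside (not part of the original disclosure), the following minimal Python sketch computes this per-band SNR; the frame, the region masks, and all numeric values are hypothetical:

```python
import numpy as np

def band_snr(frame: np.ndarray, target_mask: np.ndarray, background_mask: np.ndarray) -> float:
    """SNR as defined above: (mean gray of the point-target imaging region
    minus mean gray of the background imaging region) divided by the standard
    deviation of the background-region gray levels."""
    target_mean = frame[target_mask].mean()
    background = frame[background_mask]
    return float((target_mean - background.mean()) / background.std())

# Synthetic single-band frame: a bright 3x3 point target on a noisy background.
rng = np.random.default_rng(0)
frame = rng.normal(50.0, 5.0, size=(64, 64))
frame[30:33, 30:33] += 40.0
target_mask = np.zeros(frame.shape, dtype=bool)
target_mask[30:33, 30:33] = True
print(band_snr(frame, target_mask, ~target_mask))  # one R value per spectral band
```

In practice one such R would be computed for each of the four band images at the same instant.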
The signal-to-noise ratio R of each band image at the same instant is computed at the target points in the ultraviolet, blue, red, and infrared bands; the comprehensive confidence of the images in the different bands is then determined from these signal-to-noise ratios, calculated by the confidence curve:
M(R) = f(R; c1, c2, τ1, τ2, μ, σ) (confidence curve; the explicit expression is given only as an image in the original)
where R is the signal-to-noise ratio of the target in the ultraviolet F1, blue F2, red F3, and infrared F4 bands, and c1, c2, τ1, τ2, μ, and σ are characteristic coefficients;
Step 3: perform pixel-level fusion based on the confidences of the images in the different spectral bands and acquire the comprehensive gray-level image at the same instant. The pixel-level fusion selects the four bands (ultraviolet, blue, red, and infrared) at the same instant and fuses them, with the pixel-level fused comprehensive gray-level image obtained by the following formula:
M(F) = [ Σ_{F1∩F2∩F3∩F4 = F} m1(F1)·m2(F2)·m3(F3)·m4(F4) ] / (1 - K), where K = Σ_{F1∩F2∩F3∩F4 = ∅} m1(F1)·m2(F2)·m3(F3)·m4(F4)
where F1 denotes the ultraviolet-band spectral information of a given target-point pixel and m1(F1) its ultraviolet-band confidence evaluation that the target point is a real target; F2 and m2(F2) are the corresponding blue-band quantities; F3 and m3(F3) the red-band quantities; and F4 and m4(F4) the infrared-band quantities. F is the multispectral synthesis of F1, F2, F3, and F4 and satisfies the set-theoretic intersection F = F1 ∩ F2 ∩ F3 ∩ F4, where ∩ denotes intersection; this yields the comprehensive confidence M(F) of the target pixel-level fusion.
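As a sketch of how such an evidence fusion could be realized (an interpretation, not code from the patent): reading the fusion as Dempster's rule of combination over the binary frame {T = real target, N = non-target} with an uncertainty hypothesis TN, the four per-band masses below are hypothetical stand-ins for the SNR-derived confidences m1..m4:

```python
def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule for masses over {'T', 'N', 'TN'} ('TN' = uncertainty).
    Dividing by (1 - conflict) discards mass on empty intersections, matching
    the intersection condition F = F1 ∩ F2 ∩ F3 ∩ F4."""
    inter = {  # set intersections on the frame {T, N}
        ("T", "T"): "T", ("T", "TN"): "T", ("TN", "T"): "T",
        ("N", "N"): "N", ("N", "TN"): "N", ("TN", "N"): "N",
        ("TN", "TN"): "TN", ("T", "N"): None, ("N", "T"): None,
    }
    combined = {"T": 0.0, "N": 0.0, "TN": 0.0}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            key = inter[(a, b)]
            if key is None:
                conflict += pa * pb  # empty intersection contributes to K
            else:
                combined[key] += pa * pb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical per-band confidences for one pixel (UV F1, blue F2, red F3, IR F4).
bands = [
    {"T": 0.6, "N": 0.1, "TN": 0.3},
    {"T": 0.5, "N": 0.2, "TN": 0.3},
    {"T": 0.7, "N": 0.1, "TN": 0.2},
    {"T": 0.4, "N": 0.2, "TN": 0.4},
]
fused = bands[0]
for m in bands[1:]:
    fused = combine(fused, m)
print(fused["T"])  # comprehensive confidence M(F) that the pixel is a real target
```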
Step 4: calculate the gradient values of the comprehensive gray-level image, take pixels with larger values as candidate targets, and remove false targets whose motion state is discontinuous by combining the motion characteristics of the images at consecutive instants, obtaining the real targets in the image.
In this step, the gradient value of each pixel within the scale-space range of the comprehensive gray-level image is calculated and larger values are taken as candidate targets. Because the candidates contain only a small number of real targets, the real targets are obtained by combining the motion characteristics across consecutive frames; a real target is a candidate that moves continuously in the image without abrupt jumps.
FIG. 2 is a schematic diagram of the 3×3 region division of a target pixel; with reference to it, the calculation of the gradient values of the comprehensive gray-level image is described as follows:
a scale space is selected and, with the target pixel at the center, a 3×3 pixel grid is divided; the gray level of each cell is calculated and recorded as U = {u0, u1, u2, u3, u4, u5, u6, u7, u8};
the differences between the center element u4 and each of the eight surrounding elements u0, u1, u2, u3, u5, u6, u7, u8 are taken in absolute value and recorded as Z = {|u4-u0|, |u4-u1|, |u4-u2|, |u4-u3|, |u4-u5|, |u4-u6|, |u4-u7|, |u4-u8|};
the value of the largest element in the vector Z is selected as the gradient value of the target pixel, namely:
T=max{|u4-u0|,|u4-u1|,|u4-u2|,|u4-u3|,|u4-u5|,|u4-u6|,|u4-u7|,|u4-u8|}。
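A minimal Python sketch of this gradient rule follows (illustrative only; the detection threshold and the test image are assumptions, since the patent does not specify them):

```python
import numpy as np

def gradient_value(image: np.ndarray, row: int, col: int) -> float:
    """Gradient value T of the pixel at (row, col): the maximum absolute
    difference between the center u4 of its 3x3 neighborhood and the eight
    surrounding gray levels u0..u3, u5..u8."""
    patch = image[row - 1:row + 2, col - 1:col + 2].astype(float)
    u4 = patch[1, 1]
    diffs = np.abs(patch - u4)  # center contributes |u4 - u4| = 0, so it never wins
    return float(diffs.max())

# Candidate targets: interior pixels whose gradient value exceeds a threshold.
rng = np.random.default_rng(1)
img = rng.normal(50.0, 3.0, size=(32, 32))
img[16, 16] += 60.0  # synthetic point target
candidates = [(r, c) for r in range(1, 31) for c in range(1, 31)
              if gradient_value(img, r, c) > 20.0]
print(candidates)  # the bright pixel and its immediate neighbors
```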
Step 5: obtain the pixel coordinates of the target point and map them to spatial geometric coordinates through the mapping relation to obtain the target's geometric coordinate information. When the target pixel obtained in the image is [u v], the spatial geometric coordinates [x y] are calculated by the following formula:
x = a1·u + a2·v
y = a3·u + a4·v
wherein a1, a2, a3, a4 are camera parameters.
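Reading this as a linear mapping with the four camera parameters a1..a4 (an assumption, since the formula itself appears only as an image in the original), a sketch:

```python
import numpy as np

def pixel_to_spatial(u: float, v: float, a1: float, a2: float, a3: float, a4: float) -> np.ndarray:
    """Map pixel coordinates [u v] to spatial geometric coordinates [x y]
    via a 2x2 matrix of camera parameters (assumed linear form)."""
    A = np.array([[a1, a2], [a3, a4]])
    return A @ np.array([u, v])

# Illustrative camera parameters; real values come from calibration.
print(pixel_to_spatial(120.0, 96.0, a1=0.01, a2=0.0, a3=0.0, a4=0.01))
```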
Step 6: restore the target point information into the spectral images and obtain the result comprehensive confidence from the confidence curve.
In this step, the average R_avr of the ultraviolet target signal-to-noise ratio R1, the blue-light target signal-to-noise ratio R2, the red-light target signal-to-noise ratio R3, and the infrared target signal-to-noise ratio R4 is computed, and the result comprehensive confidence is obtained.
Specifically: after restoring the target point information into the spectral images, the confidence curve is obtained from the characteristics of the target point information and the result comprehensive confidence is calculated as:
M(R) = f(R; c1, c2, τ1, τ2, μ, σ) (confidence curve; the explicit expression is given only as an image in the original)
where R is the signal-to-noise ratio of the target point in the ultraviolet F1, blue F2, red F3, and infrared F4 bands, and c1, c2, τ1, τ2, μ, and σ are characteristic coefficients.
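Because the confidence-curve formula likewise appears only as an image, the sketch below uses a hypothetical logistic-plus-Gaussian curve purely to show how the averaged SNR R_avr and the six characteristic coefficients c1, c2, τ1, τ2, μ, σ would be consumed; the functional form and every value here are assumptions, not the patented formula:

```python
import math

def result_confidence(r_avr: float, c1: float, c2: float, tau1: float,
                      tau2: float, mu: float, sigma: float) -> float:
    """HYPOTHETICAL confidence curve: a logistic term rising with SNR past
    threshold tau1 (tau2 used as a slope scale) plus a Gaussian term peaking
    near mu with spread sigma. An illustrative stand-in only."""
    logistic = c1 / (1.0 + math.exp(-(r_avr - tau1) / max(tau2, 1e-9)))
    gaussian = c2 * math.exp(-((r_avr - mu) ** 2) / (2.0 * sigma ** 2))
    return logistic + gaussian

# Average the four band SNRs (hypothetical values), then evaluate the curve.
r1, r2, r3, r4 = 8.0, 6.5, 9.2, 5.8
r_avr = (r1 + r2 + r3 + r4) / 4.0
print(result_confidence(r_avr, c1=0.7, c2=0.3, tau1=5.0, tau2=1.0, mu=8.0, sigma=2.0))
```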
The functions described above in this application may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and so on.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the discussion above, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims.

Claims (10)

1. A multispectral sensing and target recognition method based on multivariate information fusion, characterized by comprising the following steps:
step 1: acquiring target spectral information in four spectral bands (ultraviolet, blue, red, and infrared) using detectors for the different bands;
step 2: assigning a confidence to each band's target spectral information according to its signal-to-noise ratio;
step 3: performing pixel-level fusion of the per-band spectral information using the confidences of the images in the different bands to obtain a comprehensive gray-level image;
step 4: extracting feature pixels of the comprehensive gray-level image by gradient analysis and identifying a target point;
step 5: resolving the pixel coordinates and geometric coordinates of the target point to obtain the target point information;
step 6: restoring the target point information into the spectral images and obtaining the result comprehensive confidence from the confidence curve.
2. The method of claim 1, wherein, in step 3, performing the pixel-level fusion of the per-band spectral information comprises:
selecting the four spectral bands (ultraviolet F1, blue F2, red F3, and infrared F4) for fuzzy-degree evidence fusion, computed as:
M(F) = [ Σ_{F1∩F2∩F3∩F4 = F} m1(F1)·m2(F2)·m3(F3)·m4(F4) ] / (1 - K), where K = Σ_{F1∩F2∩F3∩F4 = ∅} m1(F1)·m2(F2)·m3(F3)·m4(F4)
where F1 denotes the ultraviolet-band spectral information of a given target-point pixel and m1(F1) its ultraviolet-band confidence evaluation that the target point is a real target; F2 and m2(F2) are the corresponding blue-band quantities; F3 and m3(F3) the red-band quantities; and F4 and m4(F4) the infrared-band quantities. F is the multispectral synthesis of F1, F2, F3, and F4 and satisfies the set-theoretic intersection F = F1 ∩ F2 ∩ F3 ∩ F4, where ∩ denotes intersection; this yields the comprehensive confidence M(F) of the target pixel-level fusion.
3. The method of claim 1, wherein, in step 6, obtaining the result comprehensive confidence from the confidence curve comprises:
after restoring the target point information into the spectral images, obtaining the confidence curve from the characteristics of the target point information and computing the result comprehensive confidence as:
M(R) = f(R; c1, c2, τ1, τ2, μ, σ) (confidence curve; the explicit expression is given only as an image in the original)
where R is the signal-to-noise ratio of the target point in the ultraviolet F1, blue F2, red F3, and infrared F4 bands, and c1, c2, τ1, τ2, μ, and σ are characteristic coefficients.
4. The method according to claim 1, characterized in that, in step 1, each item of spectral information corresponding to the target information is acquired using sensors for the different spectral bands.
5. The method according to claim 4, wherein, in step 2, the signal-to-noise ratio of the image of each item of spectral information is calculated; the confidences of the images in the different bands are determined from the signal-to-noise ratios at the same instant and assigned to each image;
the image signal-to-noise ratio SNR is calculated as:
SNR = (mean gray level of the point-target imaging region - mean gray level of the background imaging region) / standard deviation of the background-region gray levels;
the signal-to-noise ratio R of each band image at the same instant is computed at the target point in each of the ultraviolet, blue, red, and infrared bands, from which the confidences of the images in the different bands are determined.
6. The method according to claim 3, wherein the different spectral bands comprise ultraviolet, blue, red, and infrared light, and the fusion selects these four bands.
7. The method according to claim 4, wherein, in step 4, the gradient values of the comprehensive gray-level image are calculated and pixels with larger values are taken as candidate targets; because the candidates contain only a small number of real targets, the real targets in the image are obtained by combining the motion characteristics of the images at consecutive instants;
a real target is a candidate that moves continuously in the image without abrupt jumps.
8. The method according to claim 5, wherein, in step 5, the pixel coordinates of the target point are acquired and mapped to spatial geometric coordinates through the mapping relation to obtain the target information.
9. The method of claim 8, wherein mapping the pixel coordinates to the spatial geometric coordinates through the mapping relation comprises:
when the pixel obtained in the image is [u v], the resulting spatial geometric coordinates [x y] satisfy:
x = a1·u + a2·v
y = a3·u + a4·v
wherein a1, a2, a3, a4 are camera parameters.
10. A multispectral sensing and target recognition device based on multivariate information fusion, the device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310233861.3A CN116403104A (en) 2023-03-03 2023-03-03 Multispectral sensing and target recognition method and device based on multivariate information fusion


Publications (1)

Publication Number Publication Date
CN116403104A true CN116403104A (en) 2023-07-07

Family

ID=87006557


Country Status (1)

Country Link
CN (1) CN116403104A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination