CN111724333A - Infrared image and visible light image fusion method based on early visual information processing - Google Patents

Infrared image and visible light image fusion method based on early visual information processing

Info

Publication number
CN111724333A
CN111724333A
Authority
CN
China
Prior art keywords
image
visible light
light image
receptive field
infrared image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010516394.1A
Other languages
Chinese (zh)
Other versions
CN111724333B (en)
Inventor
Gao Shaobing
Tan Minjie
Wei Wei
Peng Jian
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University
Priority to CN202010516394.1A
Publication of CN111724333A
Application granted
Publication of CN111724333B
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10048 - Infrared image
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an infrared image and visible light image fusion method based on early visual information processing, which comprises the following steps: S1, dynamic receptive field processing of the On-center and Off-center neurons; S2, obtaining a fused image A and a fused image B: the On-center neuron response of the visible light image, the Off-center neuron response of the infrared image and the original visible light image are fused to obtain image A, and the On-center neuron response of the infrared image, the Off-center neuron response of the visible light image and the original infrared image are fused to obtain image B; and S3, adding the fused images A and B to obtain the final fusion result. The invention can effectively fuse the salient target information in the infrared image with the background information in the visible light image, and provides more effective features for subsequent computer vision tasks under night-vision conditions, such as high-value target detection and recognition.

Description

Infrared image and visible light image fusion method based on early visual information processing
Technical Field
The invention belongs to the technical field of computer vision and image processing, relates to fusion of an infrared image and a visible light image, and particularly relates to an infrared image and visible light image fusion method based on early visual information processing.
Background
The fusion of infrared and visible light images is an important image enhancement technology: features from different wave bands are fused together to obtain an image with richer information. The visible light image typically shows the background in great detail, while the infrared image clearly reveals possibly camouflaged targets for visual detection and recognition. The fusion of the two can therefore convey more information than either image alone. Over the past decades, the image fusion problem has been studied under different schemes; representative technical routes include methods based on multi-scale feature decomposition, sparse representation and salient feature extraction, as well as other advanced fusion methods. In recent years, with the rise of deep learning, neural networks have also been applied to image fusion, and most of these methods obtain fairly reliable results.
However, most existing fusion methods are not inspired by biological visual mechanisms. A notable exception is the work of Waxman et al., which simulates the color-opponent and spatial-opponent information processing mechanisms of the biological visual system to fuse two gray-level source images into a color image. Reference: Waxman A M, Gove A N, Fay D A, et al. Color night vision: opponent processing in the fusion of visible and IR imagery [J]. Neural Networks, 1997, 10(1): 1-6.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an infrared image and visible light image fusion method based on early visual information processing, which can effectively fuse the salient target information in the infrared image with the background information in the visible light image to obtain a fused image with richer information, and provide more effective features for subsequent computer vision tasks under night-vision conditions, such as high-value target detection and recognition.
The purpose of the invention is realized by the following technical scheme: the method for fusing the infrared image and the visible light image based on early visual information processing comprises the following steps:
S1, dynamic receptive field processing of On-center neurons and Off-center neurons: carrying out convolution filtering processing on the input infrared image and the input visible light image by using a dynamic receptive field model;
S2, obtaining a fused image A and a fused image B: fusing the On-center neuron response of the visible light image, the Off-center neuron response of the infrared image and the original visible light image according to the contrast level of the visible light image to obtain a fused image A; fusing the On-center neuron response of the infrared image, the Off-center neuron response of the visible light image and the original infrared image according to the contrast level of the infrared image to obtain a fused image B;
and S3, adding the fused images A and B to obtain the final fusion result of the infrared image and the visible light image.
Further, step S1 is specifically implemented as follows: the input infrared image and the input visible light image are subjected to convolution filtering with the dynamic receptive field model, specifically:

R_On^V(x,y) = max(CRF^V(x,y) + b(x,y)·SRF^V(x,y), 0) (1)

R_Off^V(x,y) = max(−[CRF^V(x,y) + b(x,y)·SRF^V(x,y)], 0) (2)

R_On^IR(x,y) = max(CRF^IR(x,y) + b(x,y)·SRF^IR(x,y), 0) (3)

R_Off^IR(x,y) = max(−[CRF^IR(x,y) + b(x,y)·SRF^IR(x,y)], 0) (4)

wherein R_On^V(x,y) and R_Off^V(x,y) respectively represent the responses of the visible light image after the On-center and Off-center dynamic receptive field processing, R_On^IR(x,y) and R_Off^IR(x,y) respectively represent the corresponding responses of the infrared image, the superscripts V and IR denote the visible light image and the infrared image, and x and y denote pixel coordinates; CRF(x,y) and SRF(x,y) represent the responses of the central receptive field and the peripheral receptive field of the dynamic receptive field, respectively, b(x,y) represents the weight of the peripheral receptive field, and the max operator clips the On-center and Off-center dynamic receptive field responses to be greater than or equal to 0;
the central receptive field response CRF (x, y) is calculated by the following equation:
CRF(x,y)=I(x,y)*DRF(x,y;d) (5)
in formula (5), I (x, y) represents an input visible light image or infrared image, and DRF (x, y; d) represents a dynamic gaussian filter:
DRF(x,y;d) = 1/(2πd²) · exp(−(x² + y²)/(2d²)) (6)

In formula (6), d denotes the scale of the dynamic Gaussian filter and takes values between σ and λσ; the parameter σ denotes the standard deviation of the Gaussian function and can be any real number in the range [0.5, +∞); λ controls the maximum scale the dynamic Gaussian filter can reach and can be any real number in the range [1, +∞);
The scale d of the dynamic Gaussian filter is linked to the local contrast of the image:

d ∝ ΔI⁻¹(x,y;σ) (7)

Equation (7) indicates that the scale d of the dynamic Gaussian filter is inversely proportional to the local contrast ΔI(x,y;σ) of the image;
the response SRF (x, y) of the peripheral receptive field is calculated specifically from the following equation:
SRF(x,y)=I(x,y)*DRF(x,y;3σ) (8)
The weight b(x,y) of the peripheral receptive field is related to the local contrast of the image:

b(x,y) ∝ ΔI⁻¹(x,y;3σ) (9)

Equation (9) indicates that the weight b(x,y) of the peripheral receptive field is inversely proportional to the local contrast ΔI(x,y;3σ) of the image.
Further, the local contrasts ΔI(x,y;σ) and ΔI(x,y;3σ) are calculated as follows: around each pixel of the image I(x,y), local regions of size σ and 3σ respectively are selected, and the local standard deviation computed with that pixel as the center is taken as the local contrast of that pixel.
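For illustration, the following is a minimal NumPy/SciPy sketch of step S1. It holds the center scale and the surround weight b fixed for brevity, whereas the method adapts both to local contrast per formulas (7) and (9); the function names and default values are illustrative assumptions, not taken from the patent.

    import numpy as np
    from scipy.ndimage import gaussian_filter, uniform_filter

    def local_contrast(img, size):
        # Local standard deviation in a (size x size) window, used as local contrast.
        mean = uniform_filter(img, size=size)
        mean_sq = uniform_filter(img * img, size=size)
        return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

    def on_off_responses(img, sigma=0.5, b=-0.5):
        # Dynamic receptive field responses per formulas (1)-(4), simplified:
        # CRF is a Gaussian at scale sigma (formula (5)), SRF at 3*sigma (formula (8)).
        crf = gaussian_filter(img, sigma=sigma)
        srf = gaussian_filter(img, sigma=3 * sigma)
        drive = crf + b * srf              # b < 0: center-surround antagonism
        r_on = np.maximum(drive, 0.0)      # On-center response, clipped at 0
        r_off = np.maximum(-drive, 0.0)    # Off-center response, clipped at 0
        return r_on, r_off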
Further, step S2 is specifically implemented as follows: simulating the cortical and subcortical visual information fusion mechanism of the biological visual system, the On-center neuron response R_On^V(x,y) of the visible light image, the Off-center neuron response R_Off^IR(x,y) of the infrared image and the original visible light image I^V(x,y) are fused according to the contrast level of the visible light image to obtain the fused image A:

A(x,y) = β·I^V(x,y) + (1 − β)·[R_On^V(x,y) + R_Off^IR(x,y)] (10)

Similarly, the On-center neuron response R_On^IR(x,y) of the infrared image, the Off-center neuron response R_Off^V(x,y) of the visible light image and the original infrared image I^IR(x,y) are fused according to the contrast level of the infrared image to obtain the fused image B:

B(x,y) = β·I^IR(x,y) + (1 − β)·[R_On^IR(x,y) + R_Off^V(x,y)] (11)

wherein the weight parameter β is calculated according to the following formula:

β = 1 / (1 + exp(−ΔĪ(σ)/ρ)) (12)

where ΔĪ(σ) represents the average of the local contrast ΔI(x,y;σ) over the corresponding source image and the parameter ρ controls the slope of equation (12).
Further, the parameter ρ in step S2 is any real number within the range [0.5, +∞).
Further, the image fusion in step S3 is specifically implemented as follows: images A and B are added according to the formula (A(x,y) + B(x,y))/2.
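As a sketch of steps S2 and S3, the fusion stage might look as follows. Since formulas (10) to (12) are reconstructed here from the surrounding text, the placement of β as a convex weight between the original image and the neuron responses is an assumption, as is the symbol ρ for the slope parameter.

    import numpy as np

    def fuse(vis, ir, r_on_v, r_off_v, r_on_i, r_off_i,
             contrast_vis, contrast_ir, rho=0.5):
        # Formula (12): per-image weight from the mean local contrast (assumed form).
        beta_v = 1.0 / (1.0 + np.exp(-contrast_vis.mean() / rho))
        beta_i = 1.0 / (1.0 + np.exp(-contrast_ir.mean() / rho))
        # Formulas (10)-(11): contrast-weighted combination (assumed beta placement).
        A = beta_v * vis + (1.0 - beta_v) * (r_on_v + r_off_i)
        B = beta_i * ir + (1.0 - beta_i) * (r_on_i + r_off_v)
        # Step S3: pixel-wise average of the two fused images.
        return (A + B) / 2.0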
The invention has the following beneficial effects: the method realizes online real-time fusion of the infrared image and the visible light image with adaptive parameters, and can effectively fuse the salient target information in the infrared image with the background information in the visible light image to obtain a fused image with richer information. As a multi-source information fusion method, it can be embedded in equipment such as night-vision devices, applied to the effective detection and acquisition of high-value targets in night-vision environments, and provide more effective features for subsequent computer vision tasks under night-vision conditions, such as high-value target detection and recognition.
Drawings
FIG. 1 is a flow chart of an image fusion method of the present invention;
FIG. 2 is an original visible light image and an infrared image used in the present embodiment;
FIG. 3 is a diagram illustrating the results of processing the original visible light image and the infrared image using the dynamic receptive field according to the present embodiment;
FIG. 4 is a fused image A and a fused image B;
fig. 5 is a fused image finally obtained in the present embodiment.
Detailed Description
The technical scheme of the invention is further explained below with reference to the accompanying drawings.
As shown in FIG. 1, the method for fusing an infrared image and a visible light image based on early visual information processing of the present invention includes the following steps:
S1, dynamic receptive field processing of On-center neurons and Off-center neurons: the input infrared image and the input visible light image are subjected to convolution filtering with the dynamic receptive field model, specifically:

R_On^V(x,y) = max(CRF^V(x,y) + b(x,y)·SRF^V(x,y), 0) (1)

R_Off^V(x,y) = max(−[CRF^V(x,y) + b(x,y)·SRF^V(x,y)], 0) (2)

R_On^IR(x,y) = max(CRF^IR(x,y) + b(x,y)·SRF^IR(x,y), 0) (3)

R_Off^IR(x,y) = max(−[CRF^IR(x,y) + b(x,y)·SRF^IR(x,y)], 0) (4)

wherein R_On^V(x,y) and R_Off^V(x,y) respectively represent the responses of the visible light image after the On-center and Off-center dynamic receptive field processing, R_On^IR(x,y) and R_Off^IR(x,y) respectively represent the corresponding responses of the infrared image, the superscripts V and IR denote the visible light image and the infrared image, and x and y denote pixel coordinates; CRF(x,y) and SRF(x,y) represent the responses of the central receptive field and the peripheral receptive field of the dynamic receptive field, respectively, b(x,y) represents the weight of the peripheral receptive field, and the max operator clips the On-center and Off-center dynamic receptive field responses to be greater than or equal to 0;
the central receptive field response CRF (x, y) is calculated by the following equation:
CRF(x,y)=I(x,y)*DRF(x,y;d) (5)
in formula (5), I (x, y) represents an input visible light image or infrared image, and DRF (x, y; d) represents a dynamic gaussian filter:
DRF(x,y;d) = 1/(2πd²) · exp(−(x² + y²)/(2d²)) (6)

In formula (6), d denotes the scale of the dynamic Gaussian filter and takes values between σ and λσ; the parameter σ denotes the standard deviation of the Gaussian function and can be any real number in the range [0.5, +∞); λ controls the maximum scale the dynamic Gaussian filter can reach and can be any real number in the range [1, +∞).
The scale d of the dynamic Gaussian filter is linked to the local contrast of the image:

d ∝ ΔI⁻¹(x,y;σ) (7)

Equation (7) indicates that the scale d of the dynamic Gaussian filter is inversely proportional to the local contrast ΔI(x,y;σ) of the image. An example is as follows: the local contrast is normalized to the range [0,1] and divided into N levels, where N can be any integer in [2,64]. Taking N = 3 as an example, the normalized local contrast is divided into the three levels [0,1/3], [1/3,2/3] and [2/3,1]. Since d ∈ {σ,…,λσ} and taking λ = 3 as an example, d takes the value 3σ when the local contrast falls in [0,1/3], 2σ when it falls in [1/3,2/3], and σ when it falls in [2/3,1]; that is, d is inversely proportional to the local contrast of the image: the smaller the local contrast, the larger the value of d, and vice versa.
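A short sketch of this level mapping, assuming the normalization and evenly spaced levels described above (the helper name is an illustrative assumption):

    import numpy as np

    def scale_from_contrast(contrast, sigma=0.5, lam=3.0, n_levels=3):
        # Normalize local contrast to [0,1], split it into n_levels bins, and map
        # low contrast to the largest scale lam*sigma and high contrast to sigma.
        c = (contrast - contrast.min()) / (np.ptp(contrast) + 1e-12)
        level = np.minimum((c * n_levels).astype(int), n_levels - 1)
        scales = np.linspace(lam * sigma, sigma, n_levels)  # e.g. 3*sigma, 2*sigma, sigma for N=3
        return scales[level]  # per-pixel scale d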
The response SRF (x, y) of the peripheral receptive field is calculated specifically from the following equation:
SRF(x,y)=I(x,y)*DRF(x,y;3σ) (8)
the weight b (x, y) of the peripheral receptive field is related to the local contrast of the image:
b(x,y) ∝ ΔI⁻¹(x,y;3σ) (9)

Equation (9) indicates that the weight b(x,y) of the peripheral receptive field is inversely proportional to the local contrast ΔI(x,y;3σ) of the image, and the value range of b(x,y) is [−1,0]. An example is as follows: the local contrast is normalized to the range [0,1] and divided into N levels, where N can be any integer in [2,64]. Taking N = 3 as an example, the normalized local contrast is divided into the three levels [0,1/3], [1/3,2/3] and [2/3,1]. Since b(x,y) ∈ [−1,0], b(x,y) takes the value −1/3 when the local contrast falls in [0,1/3], −2/3 when it falls in [1/3,2/3], and −1 when it falls in [2/3,1]; that is, b(x,y) is inversely proportional to the local contrast of the image: the larger the local contrast, the smaller the value of b(x,y), and vice versa.
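The surround weight can be discretized in the same way; a sketch under the same assumptions:

    import numpy as np

    def surround_weight_from_contrast(contrast, n_levels=3):
        # Map normalized local contrast (at scale 3*sigma) to b(x,y) in [-1, 0]:
        # the higher the contrast, the more negative the surround weight.
        c = (contrast - contrast.min()) / (np.ptp(contrast) + 1e-12)
        level = np.minimum((c * n_levels).astype(int), n_levels - 1)
        weights = np.linspace(-1.0 / n_levels, -1.0, n_levels)  # -1/3, -2/3, -1 for N=3
        return weights[level]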
The local contrasts ΔI(x,y;σ) and ΔI(x,y;3σ) are calculated as follows: around each pixel of the image I(x,y), local regions of size σ and 3σ respectively are selected, and the local standard deviation computed with that pixel as the center is taken as the local contrast of that pixel.
S2, obtaining a fused image A and a fused image B: the On-center neuron response of the visible light image, the Off-center neuron response of the infrared image and the original visible light image are fused according to the contrast level of the visible light image to obtain a fused image A; the On-center neuron response of the infrared image, the Off-center neuron response of the visible light image and the original infrared image are fused according to the contrast level of the infrared image to obtain a fused image B.
The specific implementation method is as follows: simulating the cortical and subcortical visual information fusion mechanism of the biological visual system, the On-center neuron response R_On^V(x,y) of the visible light image, the Off-center neuron response R_Off^IR(x,y) of the infrared image and the original visible light image I^V(x,y) are fused according to the contrast level of the visible light image to obtain the fused image A:

A(x,y) = β·I^V(x,y) + (1 − β)·[R_On^V(x,y) + R_Off^IR(x,y)] (10)

Similarly, the On-center neuron response R_On^IR(x,y) of the infrared image, the Off-center neuron response R_Off^V(x,y) of the visible light image and the original infrared image I^IR(x,y) are fused according to the contrast level of the infrared image to obtain the fused image B:

B(x,y) = β·I^IR(x,y) + (1 − β)·[R_On^IR(x,y) + R_Off^V(x,y)] (11)

wherein the weight parameter β is calculated according to the following formula:

β = 1 / (1 + exp(−ΔĪ(σ)/ρ)) (12)

where ΔĪ(σ) represents the average of the local contrast ΔI(x,y;σ) over the corresponding source image; the parameter ρ controls the slope of equation (12) and is any real number within the range [0.5, +∞).
S3, adding the fused images A and B to obtain the final fusion result of the infrared image and the visible light image; the specific implementation method is as follows: images A and B are added according to the formula (A(x,y) + B(x,y))/2.
The present embodiment uses an infrared image and a visible light image of size 270 × 360 (as shown in FIG. 2, the images are derived from the TNO image dataset and are named "capture"; (a) and (b) are the visible light image and the infrared image, respectively). The parameter settings in this embodiment are σ = 0.5, λ = 3 and N = 3, with the slope parameter ρ of equation (12) chosen within its admissible range [0.5, +∞). In step S1, the pair of visible light and infrared images named "vegetation" is processed with the dynamic receptive fields of the On-center and Off-center neurons (equations (1) to (4)), and the resulting responses R_On^V, R_Off^V, R_On^IR and R_Off^IR are shown in images (a), (b), (c) and (d) of FIG. 3. Images A and B are then calculated according to equations (10) and (11), giving A(x,y) (FIG. 4(a)) and B(x,y) (FIG. 4(b)), and the two images are added according to the method of S3 to obtain the fusion result shown in FIG. 5.
The simple example above illustrates the procedure on a whole image; in the actual calculation, the corresponding operations such as local convolution filtering, addition, subtraction, multiplication and division are performed on all pixels of the whole image, and the actual values and results are the experimental results obtained directly in program runs. This simple example explains the whole flow of the infrared image and visible light image fusion method based on the early visual information processing mechanism.
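Tying the sketches above together, a hedged end-to-end run on one image pair could look like the following; the file names are placeholders, and the fixed-scale simplification from the S1 sketch still applies.

    import numpy as np
    from imageio.v3 import imread

    # Placeholder file names; assumes a registered single-channel pair of equal size.
    vis = imread("visible.png").astype(np.float64) / 255.0
    ir = imread("infrared.png").astype(np.float64) / 255.0

    sigma = 0.5                           # embodiment settings: sigma=0.5, lambda=3, N=3
    c_vis = local_contrast(vis, size=3)   # local contrast of the visible image
    c_ir = local_contrast(ir, size=3)     # local contrast of the infrared image

    r_on_v, r_off_v = on_off_responses(vis, sigma=sigma)
    r_on_i, r_off_i = on_off_responses(ir, sigma=sigma)

    fused = fuse(vis, ir, r_on_v, r_off_v, r_on_i, r_off_i, c_vis, c_ir)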
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to help the reader understand the principles of the invention, and that the invention is not limited to the specifically described embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from its spirit, and such changes and combinations remain within the scope of the invention.

Claims (6)

1. A method for fusing an infrared image and a visible light image based on early visual information processing, characterized by comprising the following steps:
S1, dynamic receptive field processing of On-center neurons and Off-center neurons: carrying out convolution filtering processing on the input infrared image and the input visible light image by using a dynamic receptive field model;
S2, obtaining a fused image A and a fused image B: fusing the On-center neuron response of the visible light image, the Off-center neuron response of the infrared image and the original visible light image according to the contrast level of the visible light image to obtain a fused image A; fusing the On-center neuron response of the infrared image, the Off-center neuron response of the visible light image and the original infrared image according to the contrast level of the infrared image to obtain a fused image B;
and S3, adding the fused images A and B to obtain the final fusion result of the infrared image and the visible light image.
2. The method for fusing an infrared image and a visible light image based on early visual information processing according to claim 1, wherein step S1 is specifically implemented as follows: the input infrared image and the input visible light image are subjected to convolution filtering with the dynamic receptive field model, specifically:

R_On^V(x,y) = max(CRF^V(x,y) + b(x,y)·SRF^V(x,y), 0) (1)

R_Off^V(x,y) = max(−[CRF^V(x,y) + b(x,y)·SRF^V(x,y)], 0) (2)

R_On^IR(x,y) = max(CRF^IR(x,y) + b(x,y)·SRF^IR(x,y), 0) (3)

R_Off^IR(x,y) = max(−[CRF^IR(x,y) + b(x,y)·SRF^IR(x,y)], 0) (4)

wherein R_On^V(x,y) and R_Off^V(x,y) respectively represent the responses of the visible light image after the On-center and Off-center dynamic receptive field processing, R_On^IR(x,y) and R_Off^IR(x,y) respectively represent the corresponding responses of the infrared image, the superscripts V and IR denote the visible light image and the infrared image, and x and y denote pixel coordinates; CRF(x,y) and SRF(x,y) represent the responses of the central receptive field and the peripheral receptive field of the dynamic receptive field, respectively, b(x,y) represents the weight of the peripheral receptive field, and the max operator clips the On-center and Off-center dynamic receptive field responses to be greater than or equal to 0;
the central receptive field response CRF (x, y) is calculated by the following equation:
CRF(x,y)=I(x,y)*DRF(x,y;d) (5)
in formula (5), I (x, y) represents an input visible light image or infrared image, and DRF (x, y; d) represents a dynamic gaussian filter:
DRF(x,y;d) = 1/(2πd²) · exp(−(x² + y²)/(2d²)) (6)

In formula (6), d denotes the scale of the dynamic Gaussian filter and takes values between σ and λσ; the parameter σ denotes the standard deviation of the Gaussian function and can be any real number in the range [0.5, +∞); λ controls the maximum scale the dynamic Gaussian filter can reach and can be any real number in the range [1, +∞);
the scale d of the dynamic Gaussian filter is linked to the local contrast of the image:

d ∝ ΔI⁻¹(x,y;σ) (7)

equation (7) indicates that the scale d of the dynamic Gaussian filter is inversely proportional to the local contrast ΔI(x,y;σ) of the image;
the response SRF (x, y) of the peripheral receptive field is calculated specifically from the following equation:
SRF(x,y)=I(x,y)*DRF(x,y;3σ) (8)
the weight b(x,y) of the peripheral receptive field is related to the local contrast of the image:

b(x,y) ∝ ΔI⁻¹(x,y;3σ) (9)

equation (9) indicates that the weight b(x,y) of the peripheral receptive field is inversely proportional to the local contrast ΔI(x,y;3σ) of the image.
3. The method of claim 2, wherein the local contrasts ΔI(x,y;σ) and ΔI(x,y;3σ) are calculated as follows: around each pixel of the image I(x,y), local regions of size σ and 3σ respectively are selected, and the local standard deviation computed with that pixel as the center is taken as the local contrast of that pixel.
4. The method for fusing an infrared image and a visible light image based on early visual information processing according to claim 1, wherein step S2 is specifically implemented as follows: simulating the cortical and subcortical visual information fusion mechanism of the biological visual system, the On-center neuron response R_On^V(x,y) of the visible light image, the Off-center neuron response R_Off^IR(x,y) of the infrared image and the original visible light image I^V(x,y) are fused according to the contrast level of the visible light image to obtain the fused image A:

A(x,y) = β·I^V(x,y) + (1 − β)·[R_On^V(x,y) + R_Off^IR(x,y)] (10)

similarly, the On-center neuron response R_On^IR(x,y) of the infrared image, the Off-center neuron response R_Off^V(x,y) of the visible light image and the original infrared image I^IR(x,y) are fused according to the contrast level of the infrared image to obtain the fused image B:

B(x,y) = β·I^IR(x,y) + (1 − β)·[R_On^IR(x,y) + R_Off^V(x,y)] (11)

wherein the weight parameter β is calculated according to the following formula:

β = 1 / (1 + exp(−ΔĪ(σ)/ρ)) (12)

where ΔĪ(σ) represents the average of the local contrast ΔI(x,y;σ) over the corresponding source image and the parameter ρ controls the slope of equation (12).
5. The method for fusing an infrared image and a visible light image based on early visual information processing according to claim 4, wherein the parameter ρ in step S2 is any real number within the range [0.5, +∞).
6. The method for fusing an infrared image and a visible light image based on early visual information processing according to claim 4, wherein the image fusion in step S3 is specifically implemented as follows: images A and B are added according to the formula (A(x,y) + B(x,y))/2.
CN202010516394.1A 2020-06-09 2020-06-09 Infrared image and visible light image fusion method based on early visual information processing Active CN111724333B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010516394.1A CN111724333B (en) 2020-06-09 2020-06-09 Infrared image and visible light image fusion method based on early visual information processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010516394.1A CN111724333B (en) 2020-06-09 2020-06-09 Infrared image and visible light image fusion method based on early visual information processing

Publications (2)

Publication Number Publication Date
CN111724333A 2020-09-29
CN111724333B CN111724333B (en) 2023-05-30

Family

ID=72566263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010516394.1A Active CN111724333B (en) 2020-06-09 2020-06-09 Infrared image and visible light image fusion method based on early visual information processing

Country Status (1)

Country Link
CN (1) CN111724333B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409232A (en) * 2021-06-16 2021-09-17 吉林大学 Bionic false color image fusion model and method based on sidewinder visual imaging

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103985115A (en) * 2014-04-01 2014-08-13 杭州电子科技大学 Image multi-strength edge detection method having visual photosensitive layer simulation function
WO2015157058A1 (en) * 2014-04-07 2015-10-15 Bae Systems Information & Electronic Systems Integration Inc. Contrast based image fusion
CN110120028A (en) * 2018-11-13 2019-08-13 中国科学院深圳先进技术研究院 A kind of bionical rattle snake is infrared and twilight image Color Fusion and device
CN110427823A (en) * 2019-06-28 2019-11-08 北京大学 Joint objective detection method and device based on video frame and pulse array signals
CN110458877A (en) * 2019-08-14 2019-11-15 湖南科华军融民科技研究院有限公司 The infrared air navigation aid merged with visible optical information based on bionical vision

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103985115A (en) * 2014-04-01 2014-08-13 杭州电子科技大学 Image multi-strength edge detection method having visual photosensitive layer simulation function
WO2015157058A1 (en) * 2014-04-07 2015-10-15 Bae Systems Information & Electronic Systems Integration Inc. Contrast based image fusion
CN110120028A (en) * 2018-11-13 2019-08-13 中国科学院深圳先进技术研究院 A kind of bionical rattle snake is infrared and twilight image Color Fusion and device
CN110427823A (en) * 2019-06-28 2019-11-08 北京大学 Joint objective detection method and device based on video frame and pulse array signals
CN110458877A (en) * 2019-08-14 2019-11-15 湖南科华军融民科技研究院有限公司 The infrared air navigation aid merged with visible optical information based on bionical vision

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MIN-JIE TAN et al.: "Visible-Infrared Image Fusion Based on Early Visual Information Processing Mechanisms" *
ZHEN ZHANG et al.: "Bionic Algorithm for Color Fusion of Infrared and Low Light Level Image Based on Rattlesnake Bimodal Cells" *
NI GUOQIANG et al.: "Advantages and prospects of visible/infrared image color fusion technology based on the rattlesnake bimodal-cell mechanism" *
LUO JIAJUN et al.: "Research on multi-intensity edge detection in colony images based on the function of the visual photoreceptor layer" *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409232A (en) * 2021-06-16 2021-09-17 吉林大学 Bionic false color image fusion model and method based on sidewinder visual imaging

Also Published As

Publication number Publication date
CN111724333B (en) 2023-05-30

Similar Documents

Publication Publication Date Title
RU2709661C1 (en) Training neural networks for image processing using synthetic photorealistic containing image signs
Raju et al. A fast and efficient color image enhancement method based on fuzzy-logic and histogram
Padmavathi et al. Implementation Of RGB And Grayscale Images In Plant Leaves Disease Detection –Comparative Study
Rijal et al. Ensemble of deep neural networks for estimating particulate matter from images
CN106952246A (en) The visible ray infrared image enhancement Color Fusion of view-based access control model attention characteristic
Gu et al. Research on the improvement of image edge detection algorithm based on artificial neural network
CN112580661B (en) Multi-scale edge detection method under deep supervision
CN110246154B (en) Visual target tracking method based on ICA-R multi-feature fusion and self-adaptive updating
Wang et al. Automatic illumination planning for robot vision inspection system
CN109523474A (en) A kind of enhancement method of low-illumination image based on greasy weather degradation model
CN109598200B (en) Intelligent image identification system and method for molten iron tank number
CN109508640A (en) A kind of crowd's sentiment analysis method, apparatus and storage medium
CN111724333B (en) Infrared image and visible light image fusion method based on early visual information processing
Drass et al. Semantic segmentation with deep learning: detection of cracks at the cut edge of glass
Chen et al. Image processing and understanding based on the fuzzy inference approach
Zhuang et al. Image enhancement by deep learning network based on derived image and Retinex
Singh et al. Multiscale reflection component based weakly illuminated nighttime image enhancement
Ebner On the evolution of edge detectors for robot vision using genetic programming
Boyun The principles of organizing the search for an object in an image, tracking an object and the selection of informative features based on the visual perception of a person
WO2022044673A1 (en) Image processing device, inspection system, and inspection method
Gajpal et al. Edge detection technique using hybrid fuzzy logic method
Sat et al. Object detection and recognition system for pick and place robot
Cao Optimization of Plane Image Color Enhancement Based on Computer Vision
Prystavka et al. Devising Information Technology for Determining the Redundant Information Content of a Digital Image
Luta et al. Image preprocessing using quick color averaging approach for color machine vision (CMV) systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant