CN111724333B - Infrared image and visible light image fusion method based on early visual information processing - Google Patents

Infrared image and visible light image fusion method based on early visual information processing

Info

Publication number
CN111724333B
Authority
CN
China
Prior art keywords: image, visible light, infrared image, receptive field, light image
Legal status: Active
Application number
CN202010516394.1A
Other languages
Chinese (zh)
Other versions
CN111724333A (en)
Inventor
高绍兵
谭敏洁
魏伟
彭舰
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University
Priority to CN202010516394.1A
Publication of CN111724333A
Application granted
Publication of CN111724333B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging


Abstract

The invention discloses an infrared image and visible light image fusion method based on early visual information processing, which comprises the following steps: S1, dynamic receptive field processing with On-center neurons and Off-center neurons; S2, obtaining a fused image A and a fused image B: the On-center neuron response of the visible light image, the Off-center neuron response of the infrared image and the original visible light image are fused to obtain image A; the On-center neuron response of the infrared image, the Off-center neuron response of the visible light image and the original infrared image are fused, as three components, to obtain image B; and S3, adding the fused images A and B to obtain the final fusion result. The invention can effectively combine the salient target information in the infrared image with the background information in the visible light image, and provides more effective features for subsequent computer vision tasks under night vision conditions, such as high-value target detection and recognition.

Description

Infrared image and visible light image fusion method based on early visual information processing
Technical Field
The invention belongs to the technical field of computer vision and image processing, relates to the fusion of infrared images and visible light images, and in particular relates to an infrared image and visible light image fusion method based on early visual information processing.
Background
The fusion of infrared images and visible light images is an important image enhancement technique that combines features from different wave bands to obtain an image with richer information. The visible light image typically shows the background in great detail, while the infrared image clearly reveals targets that may be camouflaged against visual detection and identification, so a fused visible-infrared image can convey more information than either image alone. Image fusion has developed over several decades under different schemes; representative technical routes include methods based on multi-scale feature decomposition, methods based on sparse representation, methods based on salient feature extraction, and other advanced fusion methods. In recent years, with the rise of deep learning, neural networks have also been applied to image fusion and in most cases yield more reliable results.
Most existing fusion methods, however, are not inspired by biological visual mechanisms. Waxman et al., by contrast, simulated the color-opponent and spatial-opponent information processing mechanisms of the biological visual system to fuse two grayscale source images into one color image (reference: Waxman A M, Gove A N, Fay D A, et al. Color night vision: opponent processing in the fusion of visible and IR imagery. Neural Networks, 1997, 10(1): 1-6). However, the images fused by the method of Waxman et al. lose part of the information of the source images and are not natural enough in color appearance, so the method struggles to give visually satisfactory results.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and to provide an infrared image and visible light image fusion method based on early visual information processing, which can effectively fuse the salient target information in the infrared image with the background information in the visible light image to obtain a fused image with richer information, and which provides more effective features for subsequent computer vision tasks under night vision conditions, such as high-value target detection and recognition.
The aim of the invention is realized by the following technical scheme: the method for fusing the infrared image and the visible light image based on early visual information processing comprises the following steps:
S1, dynamic receptive field processing with On-center neurons and Off-center neurons: convolution filtering is performed on the input infrared image and visible light image, respectively, using a dynamic receptive field model;
S2, obtaining a fused image A and a fused image B: the On-center neuron response of the visible light image, the Off-center neuron response of the infrared image and the original visible light image are fused, as three components, according to the contrast level of the visible light image to obtain fused image A; the On-center neuron response of the infrared image, the Off-center neuron response of the visible light image and the original infrared image are fused, as three components, according to the contrast level of the infrared image to obtain fused image B;
and S3, adding the fused images A and B to obtain a final fusion result of the infrared image and the visible light image.
Further, step S1 is specifically implemented as follows: convolution filtering is performed on the input infrared image and visible light image, respectively, using a dynamic receptive field model, according to the following formulas:
(Formulas (1)-(4), rendered as images in the original document: the On-center and Off-center dynamic receptive field responses of the visible light image and of the infrared image, defined in terms of CRF(x, y), SRF(x, y), b(x, y) and the max operator described below.)
wherein the four quantities on the left-hand sides of formulas (1)-(4) denote, respectively, the response of the visible light image after On-center dynamic receptive field processing, the response of the visible light image after Off-center dynamic receptive field processing, and the corresponding On-center and Off-center responses of the infrared image; the superscripts V and IR denote the visible light image and the infrared image, and x and y denote pixel coordinates; CRF(x, y) and SRF(x, y) denote the response of the central receptive field and the response of the peripheral receptive field of the dynamic receptive field, b(x, y) denotes the weight of the peripheral receptive field, and the max operator indicates that the response after On-center or Off-center dynamic receptive field processing takes values greater than or equal to 0;
the response CRF (x, y) of the central receptive field is calculated from the following formula:
CRF(x,y)=I(x,y)*DRF(x,y;d) (5)
In formula (5), I(x, y) denotes the input visible light image or infrared image, and DRF(x, y; d) denotes a dynamic Gaussian filter:
(Formula (6), rendered as an image in the original document: the dynamic Gaussian filter DRF(x, y; d) of scale d.)
d in formula (6) denotes the scale of the dynamic Gaussian filter, and d takes values between σ and λσ; the parameter σ denotes the standard deviation of the Gaussian function and takes any real value in [0.5, +∞); λ controls the maximum scale that the dynamic Gaussian filter can reach and takes any real value in [1, +∞);
the scale d of the dynamic Gaussian filter is related to the local contrast of the image:
d ∝ ΔI⁻¹(x, y; σ)   (7)
formula (7) indicates that the scale d of the dynamic Gaussian filter is inversely proportional to the local contrast ΔI(x, y; σ) of the image;
the response SRF (x, y) of the peripheral receptive field is specifically calculated from the following formula:
SRF(x,y)=I(x,y)*DRF(x,y;3σ) (8)
the weight b (x, y) of the peripheral receptive field is related to the local contrast of the image:
b(x, y) ∝ ΔI⁻¹(x, y; 3σ)   (9)
formula (9) indicates that the weight b(x, y) of the peripheral receptive field is inversely proportional to the local contrast ΔI(x, y; 3σ) of the image.
Further, the local contrasts ΔI(x, y; σ) and ΔI(x, y; 3σ) are calculated as follows: in the image I(x, y), a local neighborhood of size σ or 3σ, respectively, is selected around each pixel, and the local standard deviation computed over that neighborhood, centered on the pixel, is taken as the local contrast of that pixel.
Further, step S2 is specifically implemented as follows: the cortical and subcortical visual information fusion mechanism of the biological visual system is simulated, and the On-center neuron response of the visible light image, the Off-center neuron response of the infrared image and the original visible light image I^V(x, y) are fused, as three components, according to the contrast level of the visible light image to obtain fused image A:
(Formula (10), rendered as an image in the original document: the contrast-weighted combination of these three components that yields A(x, y).)
Similarly, the On-center neuron response of the infrared image, the Off-center neuron response of the visible light image and the original infrared image I^IR(x, y) are fused, as three components, according to the contrast level of the infrared image to obtain fused image B:
(Formula (11), rendered as an image in the original document: the contrast-weighted combination of these three components that yields B(x, y).)
wherein the weight parameter β is calculated according to formula (12) (rendered as an image in the original document), in which the mean value of the local contrast ΔI(x, y; σ) appears together with a slope parameter that controls the steepness of formula (12).
Further, the slope parameter of formula (12) in step S2 takes any real value in [0.5, +∞).
Further, the image fusion in step S3 is specifically implemented as follows: images A and B are added pixel-wise according to the formula (A(x, y) + B(x, y))/2.
The beneficial effects of the invention are as follows: the method achieves online real-time fusion of the infrared image and the visible light image with parameter self-adaptation, and can effectively fuse the salient target information in the infrared image with the background information in the visible light image to obtain a fused image with richer information. As a multi-source information fusion method, the invention can be embedded in equipment such as night vision devices and applied to the effective detection and acquisition of high-value targets in night vision environments, providing more effective features for subsequent computer vision tasks under night vision conditions, such as high-value target detection and recognition.
Drawings
FIG. 1 is a flow chart of the image fusion method of the present invention;
FIG. 2 shows the original visible light image and infrared image used in this embodiment;
FIG. 3 shows the results of processing the original visible light and infrared images with the dynamic receptive fields in this embodiment;
FIG. 4 shows fused image A and fused image B;
FIG. 5 shows the fused image finally obtained in this embodiment.
Detailed Description
The technical scheme of the invention is further described below with reference to the accompanying drawings.
As shown in fig. 1, the method for fusing an infrared image and a visible light image based on early visual information processing of the present invention comprises the following steps:
S1, dynamic receptive field processing with On-center neurons and Off-center neurons: convolution filtering is performed on the input infrared image and visible light image, respectively, using a dynamic receptive field model, according to the following formulas:
(Formulas (1)-(4), rendered as images in the original document: the On-center and Off-center dynamic receptive field responses of the visible light image and of the infrared image, defined in terms of CRF(x, y), SRF(x, y), b(x, y) and the max operator described below.)
wherein the four quantities on the left-hand sides of formulas (1)-(4) denote, respectively, the response of the visible light image after On-center dynamic receptive field processing, the response of the visible light image after Off-center dynamic receptive field processing, and the corresponding On-center and Off-center responses of the infrared image; the superscripts V and IR denote the visible light image and the infrared image, and x and y denote pixel coordinates; CRF(x, y) and SRF(x, y) denote the response of the central receptive field and the response of the peripheral receptive field of the dynamic receptive field, b(x, y) denotes the weight of the peripheral receptive field, and the max operator indicates that the response after On-center or Off-center dynamic receptive field processing takes values greater than or equal to 0;
the response CRF (x, y) of the central receptive field is calculated from the following formula:
CRF(x,y)=I(x,y)*DRF(x,y;d) (5)
In formula (5), I(x, y) denotes the input visible light image or infrared image, and DRF(x, y; d) denotes a dynamic Gaussian filter:
(Formula (6), rendered as an image in the original document: the dynamic Gaussian filter DRF(x, y; d) of scale d.)
d in formula (6) denotes the scale of the dynamic Gaussian filter, and d takes values between σ and λσ; the parameter σ denotes the standard deviation of the Gaussian function and takes any real value in [0.5, +∞); λ controls the maximum scale that the dynamic Gaussian filter can reach and takes any real value in [1, +∞);
the scale d of the dynamic Gaussian filter is related to the local contrast of the image:
d ∝ ΔI⁻¹(x, y; σ)   (7)
Formula (7) indicates that the scale d of the dynamic Gaussian filter is inversely proportional to the local contrast ΔI(x, y; σ) of the image. An example is as follows: the local contrast is normalized to the range [0, 1] and divided into N levels, where N may be any integer in [2, 64]. Taking N = 3 as an example, the normalized local contrast is divided into the three levels [0, 1/3], [1/3, 2/3] and [2/3, 1]. Since d lies between σ and λσ, and taking λ = 3 as an example, d = 3σ when the local contrast falls in [0, 1/3], d = 2σ when it falls in [1/3, 2/3], and d = σ when it falls in [2/3, 1]; that is, d is inversely proportional to the local contrast of the image: the smaller the local contrast, the larger the value of d, and vice versa.
The response SRF (x, y) of the peripheral receptive field is specifically calculated from the following formula:
SRF(x,y)=I(x,y)*DRF(x,y;3σ) (8)
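As an illustration of formulas (5) and (8), the following minimal sketch in Python (NumPy/SciPy) computes the central and peripheral receptive field responses for one fixed scale d. It assumes that the dynamic Gaussian filter DRF(x, y; d) of formula (6), which is rendered as an image in the original document, behaves as an ordinary isotropic two-dimensional Gaussian of scale d, and the function name centre_surround_responses is introduced here only for illustration. In the full method d varies from pixel to pixel with the local contrast; a sketch of that selection follows the local contrast discussion below.

import numpy as np
from scipy.ndimage import gaussian_filter

def centre_surround_responses(img, d, sigma):
    """Formula (5): CRF = I * DRF(d); formula (8): SRF = I * DRF(3*sigma).

    Assumes DRF(x, y; d) is an isotropic 2-D Gaussian of standard deviation d.
    """
    img = img.astype(np.float64)
    crf = gaussian_filter(img, sigma=d)          # central receptive field response, formula (5)
    srf = gaussian_filter(img, sigma=3 * sigma)  # peripheral receptive field response, formula (8)
    return crf, srf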
the weight b (x, y) of the peripheral receptive field is related to the local contrast of the image:
b(x, y) ∝ ΔI⁻¹(x, y; 3σ)   (9)
Formula (9) indicates that the weight b(x, y) of the peripheral receptive field is inversely proportional to the local contrast ΔI(x, y; 3σ) of the image, and b(x, y) takes values in [-1, 0]. An example is as follows: the local contrast is normalized to the range [0, 1] and divided into N levels, where N may be any integer in [2, 64]. Taking N = 3 as an example, the normalized local contrast is divided into the three levels [0, 1/3], [1/3, 2/3] and [2/3, 1]. Since b(x, y) ∈ [-1, 0], b(x, y) = -1/3 when the local contrast falls in [0, 1/3], b(x, y) = -2/3 when it falls in [1/3, 2/3], and b(x, y) = -1 when it falls in [2/3, 1]; that is, b(x, y) is inversely proportional to the local contrast of the image: the larger the local contrast, the smaller (more negative) the value of b(x, y), and vice versa.
The local contrasts ΔI(x, y; σ) and ΔI(x, y; 3σ) are calculated as follows: in the image I(x, y), a local neighborhood of size σ or 3σ, respectively, is selected around each pixel, and the local standard deviation computed over that neighborhood, centered on the pixel, is taken as the local contrast of that pixel.
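The local contrast, the per-pixel scale d and the surround weight b(x, y) described above can be sketched as follows in Python (NumPy/SciPy). This is a minimal illustration under stated assumptions rather than the patented implementation: the helper names local_contrast, quantize_contrast and scale_and_weight_maps are hypothetical, and the neighborhood is taken as an odd window of roughly the stated size, which the text does not fix precisely. The mapping from contrast level to d and to b(x, y) follows the N = 3, λ = 3 worked examples above.

import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img, scale):
    # Local standard deviation around each pixel, used here as the local contrast.
    # The window size is an assumption: an odd window roughly matching the given scale.
    win = int(2 * np.ceil(scale) + 1)
    mean = uniform_filter(img, size=win)
    mean_sq = uniform_filter(img * img, size=win)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def quantize_contrast(contrast, n_levels):
    # Normalize the contrast map to [0, 1] and split it into n_levels equal bins (0 .. n_levels - 1).
    c = (contrast - contrast.min()) / (contrast.max() - contrast.min() + 1e-12)
    return np.minimum((c * n_levels).astype(int), n_levels - 1)

def scale_and_weight_maps(img, sigma=0.5, lam=3, n_levels=3):
    # Per-pixel filter scale d(x, y) in [sigma, lam*sigma] and surround weight b(x, y) in [-1, 0].
    img = img.astype(np.float64)
    # d is inversely related to the local contrast at scale sigma:
    # the lowest-contrast level gets d = lam*sigma, the highest-contrast level gets d = sigma.
    lvl_d = quantize_contrast(local_contrast(img, sigma), n_levels)
    d = sigma + (lam - 1) * sigma * (n_levels - 1 - lvl_d) / (n_levels - 1)
    # b(x, y) is inversely related to the local contrast at scale 3*sigma:
    # the lowest-contrast level gets b = -1/n_levels, the highest-contrast level gets b = -1.
    lvl_b = quantize_contrast(local_contrast(img, 3 * sigma), n_levels)
    b = -(lvl_b + 1) / n_levels
    return d, b

With the example parameters σ = 0.5, λ = 3 and N = 3, pixels in the lowest contrast level receive d = 3σ and b = -1/3, while pixels in the highest contrast level receive d = σ and b = -1, matching the worked examples above.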
S2, obtaining a fused image A and a fused image B: the On-center neuron response of the visible light image, the Off-center neuron response of the infrared image and the original visible light image are fused, as three components, according to the contrast level of the visible light image to obtain fused image A; the On-center neuron response of the infrared image, the Off-center neuron response of the visible light image and the original infrared image are fused, as three components, according to the contrast level of the infrared image to obtain fused image B;
The specific implementation method is as follows: the cortical and subcortical visual information fusion mechanism of the biological visual system is simulated, and the On-center neuron response of the visible light image, the Off-center neuron response of the infrared image and the original visible light image I^V(x, y) are fused, as three components, according to the contrast level of the visible light image to obtain fused image A:
(Formula (10), rendered as an image in the original document: the contrast-weighted combination of these three components that yields A(x, y).)
Similarly, the On-center neuron response of the infrared image, the Off-center neuron response of the visible light image and the original infrared image I^IR(x, y) are fused, as three components, according to the contrast level of the infrared image to obtain fused image B:
(Formula (11), rendered as an image in the original document: the contrast-weighted combination of these three components that yields B(x, y).)
wherein the weight parameter β is calculated according to formula (12) (rendered as an image in the original document), in which the mean value of the local contrast ΔI(x, y; σ) appears together with a slope parameter that controls the steepness of formula (12); this slope parameter takes any real value in [0.5, +∞).
S3, adding the fused images A and B to obtain the final fusion result of the infrared image and the visible light image; the specific implementation method is as follows: images A and B are added pixel-wise according to the formula (A(x, y) + B(x, y))/2.
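For orientation, a high-level sketch of steps S2 and S3 in Python follows. Only step S3 is fully specified by the text (the pixel-wise average (A + B)/2); formulas (10)-(12) are rendered as images in the original document, so the function combine_by_contrast below is purely a hypothetical placeholder for the contrast-weighted combination of the three components, and the names combine_by_contrast and fuse_images are introduced here only for illustration. The inputs are assumed to be NumPy arrays of identical shape.

def combine_by_contrast(on_resp, off_resp, original, beta):
    # Hypothetical stand-in for formulas (10)/(11): a beta-weighted combination of the
    # On response, the Off response and the original image (the actual formulas are
    # not reproduced in the text of this document).
    return beta * (on_resp + off_resp) + (1.0 - beta) * original

def fuse_images(visible, infrared, on_v, off_v, on_ir, off_ir, beta_v, beta_ir):
    # Fused image A: On response of the visible image, Off response of the infrared image
    # and the original visible image, weighted according to the visible-image contrast (beta_v).
    A = combine_by_contrast(on_v, off_ir, visible, beta_v)
    # Fused image B: On response of the infrared image, Off response of the visible image
    # and the original infrared image, weighted according to the infrared-image contrast (beta_ir).
    B = combine_by_contrast(on_ir, off_v, infrared, beta_ir)
    # Step S3 (fixed by the text): pixel-wise average of the two fused images.
    return (A + B) / 2.0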
This embodiment uses an infrared image and a visible light image of size 270 × 360 (shown in FIG. 2; the images come from the TNO image dataset, the pair being named "Vegetation", with (a) and (b) being the visible light image and the infrared image, respectively). The parameters in this embodiment are set to σ = 0.5, λ = 3 and N = 3, together with the slope parameter of formula (12), whose value is rendered as an image in the original document. The results of processing the visible light image and infrared image named "Vegetation" with the On-center and Off-center dynamic receptive fields in step S1 (formulas (1) to (4)) are shown in FIG. 3 (a), (b), (c) and (d), respectively. Images A and B are then calculated according to formulas (10) and (11), giving A(x, y) (FIG. 4 (a)) and B(x, y) (FIG. 4 (b)), and the two images are added according to the method of step S3; the final result is shown in FIG. 5.
The simple example above describes and illustrates the method mainly in terms of whole images; the actual calculation performs the corresponding operations, such as local convolution filtering, addition, subtraction, multiplication and division, on all pixels of the whole image, and the actual numerical values and results are those obtained directly when the program runs. This simple example illustrates the overall process of the infrared image and visible light image fusion method based on early visual information processing mechanisms.
Those of ordinary skill in the art will recognize that the embodiments described herein are intended to help the reader understand the principles of the present invention, and the scope of protection of the invention is not limited to such specific statements and embodiments. Those of ordinary skill in the art can make various other specific modifications and combinations based on the teachings of the present disclosure without departing from its spirit, and such modifications and combinations remain within the scope of the present disclosure.

Claims (5)

1. An infrared image and visible light image fusion method based on early visual information processing, characterized by comprising the following steps:
S1, dynamic receptive field processing with On-center neurons and Off-center neurons: convolution filtering is performed on the input infrared image and visible light image, respectively, using a dynamic receptive field model, according to the following formulas:
(Formulas (1)-(4), rendered as images in the original document: the On-center and Off-center dynamic receptive field responses of the visible light image and of the infrared image, defined in terms of CRF(x, y), SRF(x, y), b(x, y) and the max operator described below.)
wherein the four quantities on the left-hand sides of formulas (1)-(4) denote, respectively, the response of the visible light image after On-center dynamic receptive field processing, the response of the visible light image after Off-center dynamic receptive field processing, and the corresponding On-center and Off-center responses of the infrared image; the superscripts V and IR denote the visible light image and the infrared image, and x and y denote pixel coordinates; CRF(x, y) and SRF(x, y) denote the response of the central receptive field and the response of the peripheral receptive field of the dynamic receptive field, b(x, y) denotes the weight of the peripheral receptive field, and the max operator indicates that the response after On-center or Off-center dynamic receptive field processing takes values greater than or equal to 0;
the response CRF (x, y) of the central receptive field is calculated from the following formula:
CRF(x,y)=I(x,y)*DRF(x,y;d) (5)
In formula (5), I(x, y) denotes the input visible light image or infrared image, and DRF(x, y; d) denotes a dynamic Gaussian filter:
(Formula (6), rendered as an image in the original document: the dynamic Gaussian filter DRF(x, y; d) of scale d.)
d in formula (6) denotes the scale of the dynamic Gaussian filter, and d takes values between σ and λσ; the parameter σ denotes the standard deviation of the Gaussian function and takes any real value in [0.5, +∞); λ controls the maximum scale that the dynamic Gaussian filter can reach and takes any real value in [1, +∞);
the scale d of the dynamic Gaussian filter is related to the local contrast of the image:
d ∝ ΔI⁻¹(x, y; σ)   (7)
formula (7) indicates that the scale d of the dynamic Gaussian filter is inversely proportional to the local contrast ΔI(x, y; σ) of the image;
the response SRF (x, y) of the peripheral receptive field is specifically calculated from the following formula:
SRF(x,y)=I(x,y)*DRF(x,y;3σ) (8)
the weight b (x, y) of the peripheral receptive field is related to the local contrast of the image:
b(x, y) ∝ ΔI⁻¹(x, y; 3σ)   (9)
formula (9) indicates that the weight b(x, y) of the peripheral receptive field is inversely proportional to the local contrast ΔI(x, y; 3σ) of the image;
S2, obtaining a fused image A and a fused image B: the On-center neuron response of the visible light image, the Off-center neuron response of the infrared image and the original visible light image are fused, as three components, according to the contrast level of the visible light image to obtain fused image A; the On-center neuron response of the infrared image, the Off-center neuron response of the visible light image and the original infrared image are fused, as three components, according to the contrast level of the infrared image to obtain fused image B;
and S3, adding the fused images A and B to obtain a final fusion result of the infrared image and the visible light image.
2. The method for fusing an infrared image and a visible light image based on early visual information processing according to claim 1, characterized in that the local contrasts ΔI(x, y; σ) and ΔI(x, y; 3σ) are calculated as follows: in the image I(x, y), a local neighborhood of size σ or 3σ, respectively, is selected around each pixel, and the local standard deviation computed over that neighborhood, centered on the pixel, is taken as the local contrast of that pixel.
3. The method for fusing an infrared image and a visible light image based on early visual information processing according to claim 1, characterized in that step S2 is specifically implemented as follows: the cortical and subcortical visual information fusion mechanism of the biological visual system is simulated, and the On-center neuron response of the visible light image, the Off-center neuron response of the infrared image and the original visible light image I^V(x, y) are fused, as three components, according to the contrast level of the visible light image to obtain fused image A:
(Formula (10), rendered as an image in the original document: the contrast-weighted combination of these three components that yields A(x, y).)
similarly, the On-center neuron response of the infrared image, the Off-center neuron response of the visible light image and the original infrared image I^IR(x, y) are fused, as three components, according to the contrast level of the infrared image to obtain fused image B:
(Formula (11), rendered as an image in the original document: the contrast-weighted combination of these three components that yields B(x, y).)
wherein the weight parameter β is calculated according to formula (12) (rendered as an image in the original document), in which the mean value of the local contrast ΔI(x, y; σ) appears together with a slope parameter that controls the steepness of formula (12).
4. The method for fusing an infrared image and a visible light image based on early visual information processing according to claim 3, characterized in that the slope parameter of formula (12) in step S2 takes any real value in [0.5, +∞).
5. The method for fusing an infrared image and a visible light image based on early visual information processing according to claim 3, characterized in that the image fusion in step S3 is specifically implemented as follows: images A and B are added pixel-wise according to the formula (A(x, y) + B(x, y))/2.
CN202010516394.1A 2020-06-09 2020-06-09 Infrared image and visible light image fusion method based on early visual information processing Active CN111724333B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010516394.1A CN111724333B (en) 2020-06-09 2020-06-09 Infrared image and visible light image fusion method based on early visual information processing


Publications (2)

Publication Number Publication Date
CN111724333A CN111724333A (en) 2020-09-29
CN111724333B (en) 2023-05-30

Family

ID=72566263


Country Status (1)

Country Link
CN (1) CN111724333B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409232B (en) * 2021-06-16 2023-11-10 吉林大学 Bionic false color image fusion model and method based on croaker visual imaging


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103985115A (en) * 2014-04-01 2014-08-13 杭州电子科技大学 Image multi-strength edge detection method having visual photosensitive layer simulation function
WO2015157058A1 (en) * 2014-04-07 2015-10-15 Bae Systems Information & Electronic Systems Integration Inc. Contrast based image fusion
CN110120028A (en) * 2018-11-13 2019-08-13 中国科学院深圳先进技术研究院 A kind of bionical rattle snake is infrared and twilight image Color Fusion and device
CN110427823A (en) * 2019-06-28 2019-11-08 北京大学 Joint objective detection method and device based on video frame and pulse array signals
CN110458877A (en) * 2019-08-14 2019-11-15 湖南科华军融民科技研究院有限公司 The infrared air navigation aid merged with visible optical information based on bionical vision

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Min-Jie Tan et al. Visible-Infrared Image Fusion Based on Early Visual Information Processing Mechanisms. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 31(11): 4357-4369. *
Zhen Zhang et al. Bionic Algorithm for Color Fusion of Infrared and Low Light Level Image Based on Rattlesnake Bimodal Cells. IEEE Access, 2018, 6: 68981-68988. *
倪国强 et al. Advantages and prospects of visible/infrared image color fusion technology based on the rattlesnake bimodal cell mechanism. Journal of Beijing Institute of Technology, 2004, (02): 95-100. *
罗佳骏 et al. Research on multi-intensity edge detection of colony images based on the function of the visual photoreceptor layer. Chinese Journal of Biomedical Engineering, 2014, 33(06): 677-686. *

Also Published As

Publication number Publication date
CN111724333A (en) 2020-09-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant