CN113409232A - Bionic false color image fusion model and method based on sidewinder visual imaging - Google Patents

Bionic false color image fusion model and method based on sidewinder visual imaging

Info

Publication number
CN113409232A
Authority
CN
China
Prior art keywords
image
infrared
visible light
fusion
enhanced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110667804.7A
Other languages
Chinese (zh)
Other versions
CN113409232B (en)
Inventor
王勇
刘红旗
李新潮
谢文洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202110667804.7A priority Critical patent/CN113409232B/en
Publication of CN113409232A publication Critical patent/CN113409232A/en
Application granted granted Critical
Publication of CN113409232B publication Critical patent/CN113409232B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a bionic false color image fusion model and a method based on sidewinder visual imaging, wherein the model carries out image preprocessing by extracting common information and specific information of an infrared source image and a visible light source image, so that the quality of a fusion image is improved; an image fusion structure is designed by introducing a double-mode cell mathematical model of the rattlesnake, so that a double-mode cell fusion mechanism of the rattlesnake is effectively utilized, and a visual perception mechanism of the rattlesnake is better simulated; the obtained fusion image has improved color expression, more obvious details and more prominent target, and is more in line with the visual characteristics of human eyes.

Description

Bionic false color image fusion model and method based on sidewinder visual imaging
Technical Field
The invention relates to the technical field of image fusion processing, in particular to a bionic false color image fusion model and method based on sidewinder visual imaging.
Background
The aim of image fusion technology is to integrate the image information of several images, each with its own strengths and weaknesses, obtained by multiple sensors in the same environment, so as to generate a single fused image containing more information from which more accurate information can be extracted. To further develop image fusion technology, some researchers have taken the sidewinder as a research object and simulated its visual imaging mechanism; for example, A. M. Waxman et al. of the Massachusetts Institute of Technology (MIT), USA, proposed a fusion structure for low-light and infrared images using a visual receptive field model that simulates the working principle of the sidewinder's dual-mode cells.
In the Waxman fusion structure, the ON/OFF structure expresses the contrast-perception property of the center-surround antagonistic receptive field: the first stage is an enhancement stage, and the second stage is the processing of infrared-enhanced visible light and infrared-suppressed visible light, consistent with the infrared/visible fusion mechanism of rattlesnake vision. The Waxman fusion structure simulates infrared-enhanced visible light cells and infrared-suppressed visible light cells; however, although the infrared signal undergoes OFF antagonism and ON antagonism respectively and is fed into the surround region of the ganglion cell, it remains essentially an inhibitory signal, so its enhancement of the visible light signal is not obvious. As a result, the fused image obtained is not ideal in color expression, the target is not obvious, and details are not prominent.
Therefore, how to provide a bionic false color image fusion method based on rattlesnake visual imaging with a better fusion effect is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a bionic false color image fusion model and method based on sidewinder visual imaging, which solve the problems of the existing image fusion method that the color expression of the obtained fusion image is not ideal enough, the target is not obvious enough, the details are not outstanding enough, and the like.
In order to achieve the purpose, the invention adopts the following technical scheme:
in one aspect, the invention provides a bionic false color image fusion model based on sidewinder visual imaging, which comprises:
the image preprocessing module is used for extracting common information and specific information of the input infrared source image and the input visible light source image and preprocessing the infrared source image and the visible light source image;
the double-mode cell mechanism simulation module of the rattlesnake performs double-mode cell mechanism simulation on the preprocessed infrared source image and visible light source image through a double-mode cell mathematical model of the rattlesnake to obtain six output signals of the double-mode cell mechanism of the rattlesnake;
the enhanced image generation module is used for enhancing the output signals of the six types of double-mode cell models of the rattlesnake to obtain enhanced images;
the fusion signal generation module is used for carrying out fusion processing on the enhanced image to obtain a fusion signal; and
and the false color fusion image generation module is used for mapping the fusion signal to different color channels of an RGB color space to generate a false color fusion image.
Further, the image pre-processing module comprises:
a common information acquisition unit for acquiring common information components of the infrared source image and the visible light source image, that is:
I_{r}(i,j) ∩ I_{vis}(i,j) = min{I_{r}(i,j), I_{vis}(i,j)}
wherein I_{r}(i,j) represents the infrared source image, I_{vis}(i,j) represents the visible light source image, (i,j) denotes a pixel position common to the two images, and I_{r}(i,j) ∩ I_{vis}(i,j) represents the common information component of both;
a unique information acquisition unit for acquiring unique information components of the infrared source image and the visible light source image, that is:
I_{r}(i,j)* = I_{r}(i,j) - I_{r}(i,j) ∩ I_{vis}(i,j)
I_{vis}(i,j)* = I_{vis}(i,j) - I_{r}(i,j) ∩ I_{vis}(i,j)
wherein I_{r}(i,j)* represents the unique information component of the infrared source image I_{r}(i,j), and I_{vis}(i,j)* represents the unique information component of the visible light source image I_{vis}(i,j);
the preprocessing unit is used for subtracting the specific information component of the visible light source image from the infrared source image to obtain a preprocessing result of the infrared source image, and subtracting the specific information component of the infrared source image from the visible light source image to obtain a preprocessing result of the visible light source image.
Further, the dual-mode cell mathematical model of the sidewinder comprises a visible-light-enhanced infrared cell mathematical model, a visible-light-suppressed infrared cell mathematical model, an infrared-enhanced visible light cell mathematical model, an infrared-suppressed visible light cell mathematical model, an AND cell mathematical model and an OR cell mathematical model.
Further, the expression of the mathematical model of the visible light-enhanced infrared cell is as follows:
I_{+IR←V}(i,j) = I_{IR}(i,j)·exp[I_{V}(i,j)]
wherein I_{+IR←V}(i,j) denotes the image obtained after visible light enhances infrared, I_{IR}(i,j) represents the infrared image, and I_{V}(i,j) represents the visible light image;
the expression of the visible-light-suppressed infrared cell mathematical model is as follows:
I_{-IR←V}(i,j) = I_{IR}(i,j)·log[I_{V}(i,j)+1]
wherein I_{-IR←V}(i,j) represents the image obtained after visible light suppresses infrared;
the expression of the infrared-enhanced visible light cell mathematical model is as follows:
I_{+V←IR}(i,j) = I_{V}(i,j)·exp[I_{IR}(i,j)]
wherein I_{+V←IR}(i,j) represents the image obtained after infrared enhances the visible light signal;
the expression of the infrared-suppressed visible light cell mathematical model is as follows:
I_{-V←IR}(i,j) = I_{V}(i,j)·log[I_{IR}(i,j)+1]
wherein I_{-V←IR}(i,j) represents the image obtained after infrared suppresses the visible light signal;
the expression of the AND cell mathematical model is as follows:
when I_{V}(i,j) < I_{IR}(i,j), the fusion result is:
I_{AND}(i,j) = m·I_{V}(i,j) + n·I_{IR}(i,j)
when I_{V}(i,j) > I_{IR}(i,j), the fusion result is:
I_{AND}(i,j) = n·I_{V}(i,j) + m·I_{IR}(i,j)
wherein m > 0.5, n < 0.5, and I_{AND}(i,j) represents the image obtained after the infrared image and the visible light image are combined by the weighted AND operation;
the expression of the OR cell mathematical model is:
when I_{V}(i,j) < I_{IR}(i,j), the fusion result is:
I_{OR}(i,j) = n·I_{V}(i,j) + m·I_{IR}(i,j)
when I_{V}(i,j) > I_{IR}(i,j), the fusion result is:
I_{OR}(i,j) = m·I_{V}(i,j) + n·I_{IR}(i,j)
wherein m > 0.5, n < 0.5, and I_{OR}(i,j) represents the image obtained after the visible light image and the infrared image are combined by the weighted OR operation.
Further, the six rattlesnake dual-mode cell model output signals comprise an AND output signal, an OR output signal, an infrared-enhanced visible light output signal, an infrared-suppressed visible light output signal, a visible-light-enhanced infrared output signal and a visible-light-suppressed infrared output signal.
Further, the enhanced image generation module includes:
an enhanced image +OR_AND generating unit, for feeding the OR output signal and the AND output signal to the central excitation region and the surround suppression region of an ON-center type receptive field, respectively, generating an enhanced image +OR_AND;
an enhanced image +VIS generation unit, for feeding the infrared-enhanced visible light output signal and the infrared-suppressed visible light output signal to the central excitation region and the surround suppression region of an ON-center type receptive field, respectively, to generate an enhanced image +VIS; and
an enhanced image +IR generating unit, for feeding the visible-light-enhanced infrared output signal and the visible-light-suppressed infrared output signal to the central suppression region and the surround excitation region of an OFF-center type receptive field, respectively, resulting in an enhanced image +IR.
Further, the fusion signal generation module includes:
an image feed-in unit for feeding the enhanced image +OR_AND, the enhanced image +VIS and the enhanced image +IR into the central and surrounding regions corresponding to the two ON-center type receptive fields, respectively, obtaining a fusion signal +VIS+OR_AND and a fusion signal +VIS+IR; and
a linear OR operation unit for linearly OR-ing the enhanced image +VIS and the enhanced image +OR_AND, generating a fusion signal +OR_AND OR +VIS.
On the other hand, the invention also provides a bionic false color image fusion method based on the sidewinder visual imaging, which comprises the following steps:
acquiring an infrared source image and a visible light source image to be processed;
and inputting the acquired infrared source image and visible light source image into the above bionic false color image fusion model based on sidewinder visual imaging, and outputting a false color fusion image.
According to the technical scheme, compared with the prior art, the invention discloses the bionic false color image fusion model and the method based on the sidewinder visual imaging, the model carries out image preprocessing by extracting the common information and the specific information of the infrared source image and the visible light source image, and the quality of the fusion image is improved; an image fusion structure is designed by introducing a double-mode cell mathematical model of the rattlesnake, so that a double-mode cell fusion mechanism of the rattlesnake is effectively utilized, and a visual perception mechanism of the rattlesnake is better simulated; the obtained fusion image has improved color expression, more obvious details and more prominent target, and is more in line with the visual characteristics of human eyes.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a schematic structural diagram of a bionic false color image fusion model based on sidewinder visual imaging provided by the present invention;
FIG. 2 is a schematic diagram of an implementation of an image pre-processing module;
FIG. 3 is a schematic diagram of the structure of the ON-center receptive field model and the OFF-center receptive field model;
FIG. 4 is a schematic diagram of an implementation flow of a bionic false color image fusion method based on sidewinder visual imaging provided by the invention;
fig. 5 is a schematic diagram of an implementation principle of a bionic false color image fusion method based on the visual imaging of the rattlesnake.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In one aspect, referring to fig. 1, an embodiment of the present invention discloses a bionic false color image fusion model based on sidewinder visual imaging, which includes:
the image preprocessing module 1 is used for extracting common information and specific information of an input infrared source image and an input visible light source image and preprocessing the infrared source image and the visible light source image;
the rattlesnake dual-mode cell mechanism simulation module 2 is used for simulating the rattlesnake dual-mode cell mechanism on the preprocessed infrared source image and visible light source image through the rattlesnake dual-mode cell mathematical models, so as to obtain the six output signals of the rattlesnake dual-mode cell mechanism;
the enhanced image generation module 3 is used for enhancing the output signals of the six types of double-mode cell models of the rattlesnake to obtain enhanced images;
the fusion signal generation module 4 is used for carrying out fusion processing on the enhanced image to obtain a fusion signal; and
and the false color fusion image generation module 5 is used for mapping the fusion signal to different color channels of the RGB color space to generate a false color fusion image.
Specifically, the image preprocessing module 1 includes:
the common information acquisition unit is used for acquiring common information components of the infrared source image and the visible light source image, namely:
I_{r}(i,j) ∩ I_{vis}(i,j) = min{I_{r}(i,j), I_{vis}(i,j)}
wherein I_{r}(i,j) represents the infrared source image, I_{vis}(i,j) represents the visible light source image, (i,j) denotes a pixel position common to the two images, and I_{r}(i,j) ∩ I_{vis}(i,j) represents the common information component of both;
the unique information acquisition unit is used for acquiring the unique information components of the infrared source image and the visible light source image, namely:
I_{r}(i,j)* = I_{r}(i,j) - I_{r}(i,j) ∩ I_{vis}(i,j)
I_{vis}(i,j)* = I_{vis}(i,j) - I_{r}(i,j) ∩ I_{vis}(i,j)
wherein I_{r}(i,j)* represents the unique information component of the infrared source image I_{r}(i,j), and I_{vis}(i,j)* represents the unique information component of the visible light source image I_{vis}(i,j);
a preprocessing unit for subtracting the unique information component I_{vis}(i,j)* of the visible light source image from the infrared source image I_{r}(i,j) to obtain the preprocessing result of the infrared source image, namely I_{r}(i,j) - I_{vis}(i,j)*, and subtracting the unique information component I_{r}(i,j)* of the infrared source image from the visible light source image I_{vis}(i,j) to obtain the preprocessing result of the visible light source image, namely I_{vis}(i,j) - I_{r}(i,j)*; I_{r}(i,j) - I_{vis}(i,j)* and I_{vis}(i,j) - I_{r}(i,j)* are taken as the preprocessed infrared image and visible light image, denoted IR and VIS respectively, namely:
IR = I_{r}(i,j) - I_{vis}(i,j)*
VIS = I_{vis}(i,j) - I_{r}(i,j)*
fig. 2 shows the principle that the units in the image preprocessing module acquire and preprocess the common and unique features of the infrared source image and the visible light source image to finally obtain the preprocessed infrared image IR and visible light image VIS.
The preprocessing operation in this embodiment processes the source images input for image fusion according to certain requirements: some image information is retained or improved, and some image information that is unimportant for the subsequent processing is omitted, so as to achieve an image enhancement effect and further improve the quality of the finally obtained fused image.
If the infrared image and the visible light image are to be fused into a single image for display, the image information of the two source images must be selected and emphasized. Subtracting the common information of the two source images from the infrared source image reduces the proportion of information shared by both and highlights the image information that is unique to the infrared source image but lacking in the visible light source image; the same reasoning applies to subtracting the common information from the visible light source image. This makes it easier for the fused image to integrate and present the information of both source images in the subsequent image fusion step.
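As an illustrative sketch of this preprocessing (assuming registered, single-channel source images as float arrays in [0, 1]; the function name preprocess_pair and the clipping to [0, 1] are choices made here for readability, not part of the patent text):

```python
import numpy as np

def preprocess_pair(ir_src: np.ndarray, vis_src: np.ndarray):
    """Extract common/unique information and return the preprocessed IR and VIS images."""
    common = np.minimum(ir_src, vis_src)      # I_r ∩ I_vis = min{I_r, I_vis}
    ir_unique = ir_src - common               # I_r*  : information unique to the infrared image
    vis_unique = vis_src - common             # I_vis*: information unique to the visible image
    ir_pre = np.clip(ir_src - vis_unique, 0.0, 1.0)    # IR  = I_r  - I_vis*
    vis_pre = np.clip(vis_src - ir_unique, 0.0, 1.0)   # VIS = I_vis - I_r*
    return ir_pre, vis_pre
```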
In this embodiment, the sidewinder dual-mode cell mathematical model includes a visible-light-enhanced infrared cell mathematical model, a visible-light-suppressed infrared cell mathematical model, an infrared-enhanced visible light cell mathematical model, an infrared-suppressed visible light cell mathematical model, an AND cell mathematical model and an OR cell mathematical model.
In the visible-light-enhanced infrared cell, the infrared signal stimulus is dominant and therefore occupies the principal position in the cell's mathematical model, while the visible light stimulus produces no response on its own and plays an auxiliary enhancing role; the enhancing effect of the visible light image can be represented by an exponential function, so the mathematical model of the visible-light-enhanced infrared cell is finally obtained as:
I_{+IR←V}(i,j) = I_{IR}(i,j)·exp[I_{V}(i,j)]
wherein I_{+IR←V}(i,j) denotes the image obtained after visible light enhances infrared, I_{IR}(i,j) represents the infrared image, and I_{V}(i,j) represents the visible light image.
In the visible-light-suppressed infrared cell, the infrared signal stimulus is likewise dominant and occupies the principal position in the cell's mathematical model, while the visible light stimulus produces no response on its own and plays an auxiliary inhibiting role; the inhibiting effect of the visible light image can be represented by a logarithmic function, so the mathematical model of the visible-light-suppressed infrared cell is finally obtained as:
I_{-IR←V}(i,j) = I_{IR}(i,j)·log[I_{V}(i,j)+1]
wherein I_{-IR←V}(i,j) represents the image obtained after visible light suppresses infrared.
In the infrared-enhanced visible light cell, the visible light signal stimulus is dominant and occupies the principal position in the cell's mathematical model, while the infrared stimulus produces no response on its own and plays an auxiliary enhancing role; the enhancing effect of the infrared image can be represented by an exponential function, so the mathematical model of the infrared-enhanced visible light cell is finally obtained as:
I_{+V←IR}(i,j) = I_{V}(i,j)·exp[I_{IR}(i,j)]
wherein I_{+V←IR}(i,j) represents the image obtained after infrared enhances the visible light signal.
In the infrared-suppressed visible light cell, the visible light signal stimulus is dominant and occupies the principal position in the cell's mathematical model, while the infrared stimulus produces no response on its own and plays an auxiliary inhibiting role; the inhibiting effect of the infrared image can be represented by a logarithmic function, so the mathematical model of the infrared-suppressed visible light cell is finally obtained as:
I_{-V←IR}(i,j) = I_{V}(i,j)·log[I_{IR}(i,j)+1]
wherein I_{-V←IR}(i,j) represents the image obtained after infrared suppresses the visible light signal.
in the method, when two kinds of signal stimuli exist simultaneously in the cell, the cell has a relatively obvious response, the infrared signal and the visible light signal have no substantial difference, and only the magnitude of the respective stimulus intensities can influence the response, so that the combined effect of the visible light image and the infrared image can be simulated in a 'weighted sum' mode, and finally, the mathematical model of the cell is obtained as follows:
when I isV(i,j)<IR(i, j), the fusion result is:
IAND(i,j)=mIV(i,j)+nIR(i,j)
when I isV(i,j)>IR(i, j) the fusion result is
IAND(i,j)=nIV(i,j)+mIR(i,j)
Wherein m is>0.5,n<0.5,IANDAnd (i, j) represents an image obtained by weighting and acting on the infrared image and the visible light image.
In the OR cell, either the infrared signal stimulus or the visible light stimulus acting alone will produce a response, and the simultaneous presence of both stimuli produces a gain effect, embodying a cooperative, win-win partnership between the two signals; the cooperative action of the visible light image and the infrared image can therefore be simulated by a weighted OR operation, and the OR cell mathematical model is finally obtained as:
when I_{V}(i,j) < I_{IR}(i,j), the fusion result is:
I_{OR}(i,j) = n·I_{V}(i,j) + m·I_{IR}(i,j)
when I_{V}(i,j) > I_{IR}(i,j), the fusion result is:
I_{OR}(i,j) = m·I_{V}(i,j) + n·I_{IR}(i,j)
wherein m > 0.5, n < 0.5, and I_{OR}(i,j) represents the image obtained after the visible light image and the infrared image are combined by the weighted OR operation.
The six rattlesnake dual-mode cell mathematical models described above process the visible light image (VIS) and the infrared image (IR) to obtain the six dual-mode cell output signals, namely the AND output signal V∩IR, the OR output signal V∪IR, the infrared-enhanced visible light output signal +V←IR, the infrared-suppressed visible light output signal -V←IR, the visible-light-enhanced infrared output signal +IR←V and the visible-light-suppressed infrared output signal -IR←V.
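For illustration only, these six output signals can be computed pixel-wise from the preprocessed images as in the following sketch; the weights m = 0.7 and n = 0.3 are arbitrary example values satisfying the stated constraints m > 0.5 and n < 0.5, and the function name is an assumption.

```python
import numpy as np

def dual_mode_cell_outputs(ir_pre: np.ndarray, vis_pre: np.ndarray,
                           m: float = 0.7, n: float = 0.3):
    """Six rattlesnake dual-mode cell output signals (pixel-wise)."""
    enh_ir  = ir_pre * np.exp(vis_pre)           # +IR<-V : visible light enhances infrared
    sup_ir  = ir_pre * np.log(vis_pre + 1.0)     # -IR<-V : visible light suppresses infrared
    enh_vis = vis_pre * np.exp(ir_pre)           # +V<-IR : infrared enhances visible light
    sup_vis = vis_pre * np.log(ir_pre + 1.0)     # -V<-IR : infrared suppresses visible light
    # AND cell: the weaker input receives the larger weight m (m > 0.5, n < 0.5)
    and_out = np.where(vis_pre < ir_pre, m * vis_pre + n * ir_pre,
                                         n * vis_pre + m * ir_pre)
    # OR cell: the stronger input receives the larger weight m
    or_out  = np.where(vis_pre < ir_pre, n * vis_pre + m * ir_pre,
                                         m * vis_pre + n * ir_pre)
    return enh_vis, sup_vis, enh_ir, sup_ir, and_out, or_out
```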
Specifically, the enhanced image generation module 3 includes:
an enhanced image +OR_AND generating unit, for feeding the OR output signal V∪IR into the central excitation region of the ON-center receptive field and feeding the AND output signal V∩IR into the surround suppression region of the ON-center receptive field, generating the enhanced image +OR_AND;
an enhanced image +VIS generating unit, for feeding the infrared-enhanced visible light output signal +V←IR into the central excitation region of the ON-center receptive field and feeding the infrared-suppressed visible light output signal -V←IR into the surround suppression region of the ON-center receptive field, generating the enhanced image +VIS; and
an enhanced image +IR generating unit, for feeding the visible-light-enhanced infrared output signal +IR←V into the central suppression region of the OFF-center receptive field and feeding the visible-light-suppressed infrared output signal -IR←V into the surround excitation region of the OFF-center receptive field, obtaining the enhanced image +IR.
In this embodiment, the enhanced image generation module 3 performs enhancement processing on the output signals of the six types of double-mode cell models of the rattlesnake by using the visual receptive field and the mathematical model thereof to obtain an enhanced image.
The above-mentioned visual field and its mathematical model are explained as follows:
physiological characteristics indicate that the basic action mode of the receptor field of retinal nerve cells is the spatial antagonism of concentric circles, and the two types of action modes can be divided into: one is the ON-center/OFF-surround system (i.e., ON-center excitation/OFF surround suppression receptive field), commonly referred to as simply the ON-center receptive field, and the structure is shown as a in FIG. 3. And the other is the OFF-center/ON-surround system (i.e., OFF center suppression/ON surround excitation receptive field), commonly referred to as OFF-center receptive field for short, the structure of which is shown in FIG. 3 b. The ganglion cell receptor domain can be simulated by a Gaussian difference function model through mathematical modeling, the cell activity of different regions of the ganglion cell receptor domain can be described by Gaussian distribution, and the sensitivity of the ganglion cell receptor domain is gradually reduced from the center to the periphery.
One dynamic description of the center-surround antagonistic receptive field is the passive membrane equation. According to this description, the steady-state outputs of the visual receptive field are given as follows:
Steady-state output of the ON-antagonist system:
[formula image: ON-system steady-state output, expressed in terms of C_{k}(i,j), S_{k}(i,j), the attenuation constant A and the polarization constant E]
Steady-state output of the OFF-antagonist system:
[formula image: OFF-system steady-state output, expressed in terms of the same quantities]
wherein C_{k}(i,j) and S_{k}(i,j) represent the convolution of the central input image and the surrounding input image with a Gaussian function, respectively, A is an attenuation constant and E is a polarization constant.
Wherein C_{k}(i,j) is the receptive field center response, with the expression:
C_{k}(i,j) = I_{k}(i,j) * W_{c}(i,j)
S_{k}(i,j) is the receptive field surround response, with the expression:
S_{k}(i,j) = I_{k}(i,j) * W_{s}(i,j)
wherein I_{k}(i,j) is the input image, * is the convolution operator, W_{c} and W_{s} are the Gaussian distribution functions of the central region and the surrounding region, with Gaussian template sizes of m×n and p×q respectively, and σ_{c} and σ_{s} are the spatial constants of the central region and the surrounding region; the subscripts c and s are used to distinguish the central region (Center) from the surrounding region (Surround).
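A possible sketch of the enhanced image generation is given below. The Gaussian convolutions for the center and surround use scipy.ndimage.gaussian_filter; because the exact steady-state formula appears only as a figure in the patent, the shunting form E·(C - S)/(A + C + S) used here is an assumption based on the standard passive membrane equation, and the spatial constants, A and E are placeholder values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def on_center_response(center_img, surround_img, sigma_c=1.0, sigma_s=3.0,
                       A=1.0, E=1.0):
    """ON-center/OFF-surround response: excitatory center, inhibitory surround."""
    C = gaussian_filter(center_img, sigma_c)    # C_k = I_k * W_c
    S = gaussian_filter(surround_img, sigma_s)  # S_k = I_k * W_s
    return E * (C - S) / (A + C + S)            # assumed shunting steady state

def off_center_response(center_img, surround_img, sigma_c=1.0, sigma_s=3.0,
                        A=1.0, E=1.0):
    """OFF-center/ON-surround response: inhibitory center, excitatory surround."""
    C = gaussian_filter(center_img, sigma_c)
    S = gaussian_filter(surround_img, sigma_s)
    return E * (S - C) / (A + C + S)

# Enhanced images, following the feeding scheme described above:
#   +OR_AND : OR signal -> ON center,  AND signal -> ON surround
#   +VIS    : +V<-IR    -> ON center,  -V<-IR     -> ON surround
#   +IR     : +IR<-V    -> OFF center, -IR<-V     -> OFF surround
```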
Specifically, the fusion signal generation module 4 includes:
an image feed-in unit, which is used for respectively feeding the enhanced image +OR_AND, the enhanced image +VIS and the enhanced image +IR into the central and surrounding regions corresponding to the two ON-center receptive fields, to obtain the two fusion signals +VIS+OR_AND and +VIS+IR; and
a linear OR operation unit, for linearly OR-ing the enhanced image +VIS and the enhanced image +OR_AND, generating the fusion signal +OR_AND OR +VIS.
Finally, the false color fusion image generation module 5 uses the RGB color space to map the +VIS+OR_AND, +OR_AND OR +VIS, and +VIS+IR fusion signals obtained in the fusion signal generation module to the R, G and B channels respectively, and the image obtained through the above processing is taken as the finally generated false color fusion image.
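Building on the sketches above (it reuses on_center_response), the following illustrative function assembles the false color image. The exact assignment of the enhanced images to the center and surround of the two ON-center receptive fields follows FIG. 5 and is assumed here to place +VIS in the center; reusing the OR-cell weighting for the "linear OR" and the per-channel normalization are likewise assumptions.

```python
import numpy as np

def fuse_to_rgb(enh_vis, enh_or_and, enh_ir, m=0.7, n=0.3):
    """Map the three fusion signals onto the R, G and B channels."""
    def normalize(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    def weighted_or(a, b):                      # OR-cell style weighting, assumed
        return np.where(a < b, n * a + m * b, m * a + n * b)

    fus_vis_or_and = on_center_response(enh_vis, enh_or_and)  # +VIS+OR_AND     -> R
    fus_or_and_vis = weighted_or(enh_or_and, enh_vis)         # +OR_AND OR +VIS -> G
    fus_vis_ir     = on_center_response(enh_vis, enh_ir)      # +VIS+IR         -> B
    return np.dstack([normalize(fus_vis_or_and),
                      normalize(fus_or_and_vis),
                      normalize(fus_vis_ir)])
```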
On the other hand, referring to fig. 4 and fig. 5, the embodiment of the invention also discloses a bionic false color image fusion method based on the sidewinder visual imaging, which comprises the following steps:
S1: acquiring an infrared source image and a visible light source image to be processed;
S2: inputting the acquired infrared source image and visible light source image into the above bionic false color image fusion model based on sidewinder visual imaging, and outputting a false color fusion image (an illustrative end-to-end sketch is given below).
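Chained together, steps S1 and S2 could look like the following hypothetical usage of the sketches above; ir_src and vis_src are assumed to be registered grayscale source images already loaded as float arrays in [0, 1].

```python
# S1: acquire the registered infrared and visible light source images
# (loading, registration and normalization to [0, 1] are assumed to be done elsewhere)
ir_pre, vis_pre = preprocess_pair(ir_src, vis_src)

# S2: run the fusion model and output the false color fusion image
enh_vis, sup_vis, enh_ir, sup_ir, and_out, or_out = dual_mode_cell_outputs(ir_pre, vis_pre)
img_or_and = on_center_response(or_out, and_out)        # enhanced image +OR_AND
img_vis    = on_center_response(enh_vis, sup_vis)       # enhanced image +VIS
img_ir     = off_center_response(enh_ir, sup_ir)        # enhanced image +IR
false_color = fuse_to_rgb(img_vis, img_or_and, img_ir)  # H x W x 3 false color result
```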
In summary, the embodiment of the invention designs, from a bionic perspective, a false color image fusion model based on the imaging system of the rattlesnake visual system, which is used to obtain an infrared and visible light fused image. Image preprocessing is performed by extracting the common information and unique information of the infrared and visible light images, improving the quality of the fused image. An image fusion structure is designed by introducing the rattlesnake dual-mode cell mathematical models, so that the rattlesnake dual-mode cell fusion mechanism is effectively utilized and the rattlesnake's visual perception mechanism is better simulated. At the same time, the bionic false color image fusion method better simulates the rattlesnake's fusion mechanism for infrared and visible light images: the obtained fused image is improved in color expression, targets such as people are better presented, certain details are rendered more clearly, the influence of illumination, smoke occlusion and weather conditions on the imaging effect is reduced, and the result better conforms to the visual characteristics of the human eye, which facilitates observation, understanding and further study by later personnel.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A bionic false color image fusion model based on sidewinder visual imaging is characterized by comprising the following components:
the image preprocessing module is used for extracting common information and specific information of the input infrared source image and the input visible light source image and preprocessing the infrared source image and the visible light source image;
the double-mode cell mechanism simulation module of the rattlesnake performs double-mode cell mechanism simulation on the preprocessed infrared source image and visible light source image through a double-mode cell mathematical model of the rattlesnake to obtain six output signals of the double-mode cell mechanism of the rattlesnake;
the enhanced image generation module is used for enhancing the output signals of the six types of double-mode cell models of the rattlesnake to obtain enhanced images;
the fusion signal generation module is used for carrying out fusion processing on the enhanced image to obtain a fusion signal; and
and the false color fusion image generation module is used for mapping the fusion signal to different color channels of an RGB color space to generate a false color fusion image.
2. The bionic false color image fusion model based on sidewinder visual imaging of claim 1, wherein the image preprocessing module comprises:
a common information acquisition unit for acquiring common information components of the infrared source image and the visible light source image;
a unique information acquisition unit for acquiring unique information components of the infrared source image and the visible light source image; and
the preprocessing unit is used for subtracting the specific information component of the visible light source image from the infrared source image to obtain a preprocessing result of the infrared source image, and subtracting the specific information component of the infrared source image from the visible light source image to obtain a preprocessing result of the visible light source image.
3. The bionic false color image fusion model based on sidewinder visual imaging is characterized in that the calculation formula of the common information component of the infrared source image and the visible light source image is as follows:
I_{r}(i,j) ∩ I_{vis}(i,j) = min{I_{r}(i,j), I_{vis}(i,j)}
wherein I_{r}(i,j) represents the infrared source image, I_{vis}(i,j) represents the visible light source image, (i,j) denotes a pixel position common to the two images, and I_{r}(i,j) ∩ I_{vis}(i,j) represents the common information component of both;
the calculation formulas of the unique information components of the infrared source image and the visible light source image are respectively as follows:
I_{r}(i,j)* = I_{r}(i,j) - I_{r}(i,j) ∩ I_{vis}(i,j)
I_{vis}(i,j)* = I_{vis}(i,j) - I_{r}(i,j) ∩ I_{vis}(i,j)
wherein I_{r}(i,j)* represents the unique information component of the infrared source image I_{r}(i,j), and I_{vis}(i,j)* represents the unique information component of the visible light source image I_{vis}(i,j).
4. The bionic false color image fusion model based on the visual imaging of the sidewinder according to claim 1, wherein the dual-mode cell mathematical model of the sidewinder comprises a visible-light-enhanced infrared cell mathematical model, a visible-light-suppressed infrared cell mathematical model, an infrared-enhanced visible light cell mathematical model, an infrared-suppressed visible light cell mathematical model, an AND cell mathematical model and an OR cell mathematical model.
5. The bionic false color image fusion model based on the visual imaging of the rattlesnake as claimed in claim 4, wherein the expression of the visible-light-enhanced infrared cell mathematical model is as follows:
I_{+IR←V}(i,j) = I_{IR}(i,j)·exp[I_{V}(i,j)]
wherein I_{+IR←V}(i,j) denotes the image obtained after visible light enhances infrared, I_{IR}(i,j) represents the infrared image, and I_{V}(i,j) represents the visible light image;
the expression of the visible-light-suppressed infrared cell mathematical model is as follows:
I_{-IR←V}(i,j) = I_{IR}(i,j)·log[I_{V}(i,j)+1]
wherein I_{-IR←V}(i,j) represents the image obtained after visible light suppresses infrared;
the expression of the infrared-enhanced visible light cell mathematical model is as follows:
I_{+V←IR}(i,j) = I_{V}(i,j)·exp[I_{IR}(i,j)]
wherein I_{+V←IR}(i,j) represents the image obtained after infrared enhances the visible light signal;
the expression of the infrared-suppressed visible light cell mathematical model is as follows:
I_{-V←IR}(i,j) = I_{V}(i,j)·log[I_{IR}(i,j)+1]
wherein I_{-V←IR}(i,j) represents the image obtained after infrared suppresses the visible light signal;
the expression of the AND cell mathematical model is as follows:
when I_{V}(i,j) < I_{IR}(i,j), the fusion result is:
I_{AND}(i,j) = m·I_{V}(i,j) + n·I_{IR}(i,j)
when I_{V}(i,j) > I_{IR}(i,j), the fusion result is:
I_{AND}(i,j) = n·I_{V}(i,j) + m·I_{IR}(i,j)
wherein m > 0.5, n < 0.5, and I_{AND}(i,j) represents the image obtained after the infrared image and the visible light image are combined by the weighted AND operation;
the expression of the OR cell mathematical model is:
when I_{V}(i,j) < I_{IR}(i,j), the fusion result is:
I_{OR}(i,j) = n·I_{V}(i,j) + m·I_{IR}(i,j)
when I_{V}(i,j) > I_{IR}(i,j), the fusion result is:
I_{OR}(i,j) = m·I_{V}(i,j) + n·I_{IR}(i,j)
wherein m > 0.5, n < 0.5, and I_{OR}(i,j) represents the image obtained after the visible light image and the infrared image are combined by the weighted OR operation.
6. The bionic false color image fusion model based on sidewinder visual imaging is characterized in that the six sidewinder dual-mode cell model output signals comprise an AND output signal, an OR output signal, an infrared-enhanced visible light output signal, an infrared-suppressed visible light output signal, a visible-light-enhanced infrared output signal and a visible-light-suppressed infrared output signal.
7. The bionic false color image fusion model based on sidewinder visual imaging of claim 6, wherein the enhanced image generation module comprises:
an enhanced image +OR_AND generating unit, for feeding the OR output signal and the AND output signal to the central excitation region and the surround suppression region of an ON-center type receptive field, respectively, generating an enhanced image +OR_AND;
an enhanced image +VIS generation unit, for feeding the infrared-enhanced visible light output signal and the infrared-suppressed visible light output signal to the central excitation region and the surround suppression region of an ON-center type receptive field, respectively, to generate an enhanced image +VIS; and
an enhanced image +IR generating unit, for feeding the visible-light-enhanced infrared output signal and the visible-light-suppressed infrared output signal to the central suppression region and the surround excitation region of an OFF-center type receptive field, respectively, resulting in an enhanced image +IR.
8. The bionic false color image fusion model based on sidewinder visual imaging of claim 7, wherein the fusion signal generation module comprises:
an image feed-in unit for feeding the enhanced image +OR_AND, the enhanced image +VIS and the enhanced image +IR into the central and surrounding regions corresponding to the two ON-center type receptive fields, respectively, obtaining a fusion signal +VIS+OR_AND and a fusion signal +VIS+IR; and
a linear OR operation unit for linearly OR-ing the enhanced image +VIS and the enhanced image +OR_AND, generating a fusion signal +OR_AND OR +VIS.
9. A bionic false color image fusion method based on sidewinder visual imaging is characterized by comprising the following steps:
acquiring an infrared source image and a visible light source image to be processed;
inputting the acquired infrared source image and visible light source image into a bionic false color image fusion model based on sidewinder visual imaging according to any one of claims 1-8, and outputting a false color fusion image.
CN202110667804.7A 2021-06-16 2021-06-16 Bionic false color image fusion model and method based on sidewinder visual imaging Active CN113409232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110667804.7A CN113409232B (en) 2021-06-16 2021-06-16 Bionic false color image fusion model and method based on sidewinder visual imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110667804.7A CN113409232B (en) 2021-06-16 2021-06-16 Bionic false color image fusion model and method based on sidewinder visual imaging

Publications (2)

Publication Number Publication Date
CN113409232A true CN113409232A (en) 2021-09-17
CN113409232B CN113409232B (en) 2023-11-10

Family

ID=77684422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110667804.7A Active CN113409232B (en) 2021-06-16 2021-06-16 Bionic false color image fusion model and method based on sidewinder visual imaging

Country Status (1)

Country Link
CN (1) CN113409232B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102924596A (en) * 2005-04-29 2013-02-13 詹森生物科技公司 Anti-il-6 antibodies, compositions, methods and uses
US20090018990A1 (en) * 2007-07-12 2009-01-15 Jorge Moraleda Retrieving Electronic Documents by Converting Them to Synthetic Text
WO2011004381A1 (en) * 2009-07-08 2011-01-13 Yogesh Chunilal Rathod An apparatus, system, and method for automated production of rule based near live sports event in the form of a video film for entertainment
CN104835129A (en) * 2015-04-07 2015-08-12 杭州电子科技大学 Two-band image fusion method by using local window visual attention extraction
CN106952246A (en) * 2017-03-14 2017-07-14 北京理工大学 The visible ray infrared image enhancement Color Fusion of view-based access control model attention characteristic
CN108133470A (en) * 2017-12-11 2018-06-08 深圳先进技术研究院 Infrared image and low-light coloured image emerging system and method
CN108090888A (en) * 2018-01-04 2018-05-29 北京环境特性研究所 The infrared image of view-based access control model attention model and the fusion detection method of visible images
CN108711146A (en) * 2018-04-19 2018-10-26 中国矿业大学 A kind of coal petrography identification device and method based on visible light and infrared image fusion
CN110120028A (en) * 2018-11-13 2019-08-13 中国科学院深圳先进技术研究院 A kind of bionical rattle snake is infrared and twilight image Color Fusion and device
CN110211083A (en) * 2019-06-10 2019-09-06 北京宏大天成防务装备科技有限公司 A kind of image processing method and device
CN110458877A (en) * 2019-08-14 2019-11-15 湖南科华军融民科技研究院有限公司 The infrared air navigation aid merged with visible optical information based on bionical vision
CN111724333A (en) * 2020-06-09 2020-09-29 四川大学 Infrared image and visible light image fusion method based on early visual information processing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NI Guoqiang et al.: "Advantages and prospects of visible/infrared image color fusion technology based on the rattlesnake dual-mode cell mechanism", vol. 24, no. 2, pages 95-100
WANG Yong et al.: "Pseudo color image fusion based on rattlesnake's visual receptive field model", pages 596-600

Also Published As

Publication number Publication date
CN113409232B (en) 2023-11-10

Similar Documents

Publication Publication Date Title
Sagiv et al. Structural encoding of human and schematic faces: holistic and part-based processes
CN109924990A (en) A kind of EEG signals depression identifying system based on EMD algorithm
US9061150B2 (en) Saliency-based apparatus and methods for visual prostheses
Pratarelli Semantic processing of pictures and spoken words: Evidence from event-related brain potentials
Susilo et al. The composite effect for inverted faces is reliable at large sample sizes and requires the basic face configuration
Blank et al. Mechanisms of enhancing visual–speech recognition by prior auditory information
CN109859139B (en) Blood vessel enhancement method for color fundus image
CN108133470A (en) Infrared image and low-light coloured image emerging system and method
Fazlyyyakhmatov et al. The EEG activity during binocular depth perception of 2D images
CN107563997A (en) A kind of skin disease diagnostic system, construction method, diagnostic method and diagnostic device
CN102222231B (en) Visual attention information computing device based on guidance of dorsal pathway and processing method thereof
Thorat et al. Body shape as a visual feature: Evidence from spatially-global attentional modulation in human visual cortex
CN112991250B (en) Infrared and visible light image fusion method based on sonodon acutus visual imaging
CN101241593A (en) Picture layer image processing unit and its method
CN113409232B (en) 2021-06-16 2023-11-10 Bionic false color image fusion model and method based on sidewinder visual imaging
CN117056786A (en) Non-contact stress state identification method and system
Zhang et al. Bionic algorithm for color fusion of infrared and low light level image based on rattlesnake bimodal cells
Wang et al. Pseudo color fusion of infrared and visible images based on the rattlesnake vision imaging system
CN111588345A (en) Eye disease detection method, AR glasses and readable storage medium
McCarthy et al. Augmenting intensity to enhance scene structure in prosthetic vision
Hills et al. An adaptation study of internal and external features in facial representations
Wang et al. Image fusion based on the rattlesnake visual receptive field model
Zhang Computer Vision Overview
CN113407026B (en) Brain-computer interface system and method for enhancing hairless zone brain electric response intensity
Zhang et al. Access to awareness is improved by affective learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant