CN111724333A - Infrared image and visible light image fusion method based on early visual information processing - Google Patents
Infrared image and visible light image fusion method based on early visual information processing Download PDFInfo
- Publication number
- CN111724333A CN111724333A CN202010516394.1A CN202010516394A CN111724333A CN 111724333 A CN111724333 A CN 111724333A CN 202010516394 A CN202010516394 A CN 202010516394A CN 111724333 A CN111724333 A CN 111724333A
- Authority
- CN
- China
- Prior art keywords
- image
- visible light
- light image
- receptive field
- infrared image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 230000000007 visual effect Effects 0.000 title claims abstract description 22
- 230000010365 information processing Effects 0.000 title claims abstract description 14
- 238000007500 overflow downdraw method Methods 0.000 title abstract description 10
- 210000002569 neuron Anatomy 0.000 claims abstract description 50
- 230000004044 response Effects 0.000 claims abstract description 38
- 230000004927 fusion Effects 0.000 claims abstract description 18
- 238000000034 method Methods 0.000 claims description 25
- 230000002093 peripheral effect Effects 0.000 claims description 15
- 238000001914 filtration Methods 0.000 claims description 7
- 230000007246 mechanism Effects 0.000 claims description 6
- 230000008906 neuronal response Effects 0.000 claims description 6
- 230000001054 cortical effect Effects 0.000 claims description 3
- 230000002739 subcortical effect Effects 0.000 claims description 3
- 102200159389 rs58999456 Human genes 0.000 claims description 2
- 230000004438 eyesight Effects 0.000 abstract description 6
- 238000001514 detection method Methods 0.000 abstract description 5
- 230000004297 night vision Effects 0.000 abstract description 5
- 238000004364 calculation method Methods 0.000 description 3
- 230000008485 antagonism Effects 0.000 description 2
- 238000013528 artificial neural network Methods 0.000 description 2
- 230000006870 function Effects 0.000 description 2
- 102100029469 WD repeat and HMG-box DNA-binding protein 1 Human genes 0.000 description 1
- 101710097421 WD repeat and HMG-box DNA-binding protein 1 Proteins 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000000354 decomposition reaction Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an infrared image and visible light image fusion method based on early visual information processing, which comprises the following steps: s1, processing dynamic receptive fields of the On central neurons and the Off central neurons; s2, obtaining a fused image A and a fused image B: fusing the On central neuron response of the visible light image, the Off central neuron response of the infrared image and the original visible light image to obtain an image A; fusing the On central neuron response of the infrared image, the Off central neuron response of the visible light image and the original infrared image to obtain an image B: and S3, adding the fused images A and B to obtain a final fusion result. The invention can effectively fuse the significant target information in the infrared image and the background information in the visible light image, and provides more effective characteristics for computer vision tasks under night vision conditions such as follow-up high-value target detection and identification.
Description
Technical Field
The invention belongs to the technical field of computer vision and image processing, relates to fusion of an infrared image and a visible light image, and particularly relates to an infrared image and visible light image fusion method based on early visual information processing.
Background
The fusion of the infrared image and the visible light image is an important image enhancement technology, and the characteristics of different wave bands are fused together to obtain an image with richer information. The visible image always shows the background in great detail, while the infrared image clearly shows a possible disguised object for visual detection and identification. The fusion of the visible light image and the infrared image may show more information than a single image. For decades, the image fusion problem has been developed under different schemes, wherein representative technical routes include a multi-scale feature decomposition-based method, a sparse representation-based method, a significant feature extraction-based method, and other advanced fusion methods. In recent years, with the rise of deep learning, there is a method of applying a neural network to image fusion, and most of the methods obtain more reliable results.
For most fusion methods, most of them are inspired by non-biological visual mechanisms. Waxman et al, simulates the color antagonism and spatial antagonism information processing mechanisms of the biological vision system to fuse two gray level source images into a color image. Reference documents: waxman AM, Gove A N, Fay D A, et al, color right vision: open processing in the fusion of vision and IR image [ J ]. Neural Networks,1997,10(1):1-6.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide the infrared image and visible light image fusion method based on early visual information processing, which can effectively fuse the significant target information in the infrared image and the background information in the visible light image to obtain a fusion image with more abundant information and provide more effective characteristics for computer visual tasks under night vision conditions such as subsequent high-value target detection and identification.
The purpose of the invention is realized by the following technical scheme: the method for fusing the infrared image and the visible light image based on the early visual information processing comprises the following steps:
s1, dynamic receptive field processing of On-center neurons and Off-center neurons: carrying out convolution filtering processing on the input infrared image and the input visible light image by using a dynamic receptive field model;
s2, obtaining a fused image A and a fused image B: fusing the On central neuron response of the visible light image, the Off central neuron response of the infrared image and the original visible light image according to the contrast level of the visible light image to obtain a fused image A; fusing the On central neuron response of the infrared image, the Off central neuron response of the visible light image and the original infrared image according to the contrast level of the infrared image to obtain a fused image B:
and S3, adding the fused images A and B to obtain the final fusion result of the infrared image and the visible light image.
Further, the step S1 is specifically implemented as follows: the method comprises the following steps of performing convolution filtering processing on an input infrared image and an input visible light image by using a dynamic receptive field model, and specifically comprises the following steps:
wherein ,andrespectively represent the response of the visible light image after the dynamic receptive field processing of the On central neuron and the dynamic receptive field processing of the Off central neuron,andrespectively representing the response of the infrared image after the dynamic receptive field processing of the On central neuron and the dynamic receptive field processing of the Off central neuron, wherein the superscript symbols V and IR respectively represent a visible light image and an infrared image, and x and y represent the coordinates of a pixel; CRF (x, y) and SRF (x, y) represent responses of the dynamic receptive field central receptive field and the peripheral receptive field, respectively, b (x, y) represents a weight of the peripheral receptive field, and symbol max represents a value equal to or greater than 0 for the response results after the dynamic receptive field of the On-centered neuron and the dynamic receptive field of the Off-centered neuron are processed;
the central receptive field response CRF (x, y) is calculated by the following equation:
CRF(x,y)=I(x,y)*DRF(x,y;d) (5)
in formula (5), I (x, y) represents an input visible light image or infrared image, and DRF (x, y; d) represents a dynamic gaussian filter:
d in the formula (6) represents the dimension of the dynamic Gaussian filter, and the value range of d is { sigma, lambda sigma }; the parameter σ represents the standard deviation of the gaussian function, and takes all real numbers in a range of [0.5, + ∞); λ represents the maximum dimension that can be achieved by controlling the dynamic gaussian filter, and all real numbers with a numeric range of [1, + ∞ ];
the dimension d of the dynamic gaussian filter is linked to the local contrast of the image:
d∝ΔI-1(x,y;σ) (7)
equation (7) shows that the dimension d of the dynamic Gaussian filter is inversely proportional to the local contrast Δ I (x, y; σ) of the image;
the response SRF (x, y) of the peripheral receptive field is calculated specifically from the following equation:
SRF(x,y)=I(x,y)*DRF(x,y;3σ) (8)
the weight b (x, y) of the peripheral receptive field is related to the local contrast of the image:
b(x,y)∝ΔI-1(x,y;3σ) (9)
equation (9) indicates that the weight b (x, y) of the peripheral receptive field is inversely proportional to the local contrast Δ I (x, y; 3 σ) of the image.
Further, the specific calculation method of the local contrasts Δ I (x, y; σ) and Δ I (x, y; 3 σ) is to select local regions having region sizes of σ and 3 σ around each pixel in the image I (x, y), and the local standard deviation calculated with the pixel as the center is taken as the local contrast of the pixel.
Further, the specific implementation method of step S2 is as follows: simulating cortical and subcortical visual information fusion mechanism of biological visual system, and responding On central neuron of visible light imageOff-centric neuronal response of infrared imagesAnd the original visible light image IVAnd (x, y) fusing the three components according to the contrast level of the visible light image to obtain a fused image A:
similarly, the On-centered neuron response of the infrared imageOff-centered neuronal response of visible light imagesAnd an original infrared image IIRAnd (x, y) fusing the three components according to the contrast level of the infrared image to obtain a fused image B:
wherein the weight parameter β is calculated according to the following formula:
representing the average, parameter, of the local contrast Δ I (x, y; σ)For controlling the slope of equation (12).
Further, the specific implementation method of the image fusion in step S3 is as follows: images a and B are added according to the following formula: (A (x, y) + B (x, y))/2.
The invention has the beneficial effects that: the method can realize the on-line real-time fusion and parameter self-adaptation of the infrared image and the visible light image, and can effectively fuse the significant target information in the infrared image and the background information in the visible light image to obtain a fused image with richer information. As a multi-source information fusion method, the method can be embedded in equipment such as a night vision device and the like, is applied to effective detection and acquisition of high-value targets in a night vision environment, and provides more effective characteristics for computer vision tasks under night vision conditions such as subsequent detection and identification of the high-value targets.
Drawings
FIG. 1 is a flow chart of an image fusion method of the present invention;
FIG. 2 is an original visible light image and an infrared image used in the present embodiment;
FIG. 3 is a diagram illustrating the results of processing the original visible light image and the infrared image using the dynamic receptive field according to the present embodiment;
FIG. 4 is a fused image A and a fused image B;
fig. 5 is a fused image finally obtained in the present embodiment.
Detailed Description
The technical scheme of the invention is further explained by combining the attached drawings.
As shown in fig. 1, the method for fusing an infrared image and a visible light image based on early visual information processing of the present invention includes the following steps:
s1, dynamic receptive field processing of On-center neurons and Off-center neurons: carrying out convolution filtering processing on the input infrared image and the input visible light image by using a dynamic receptive field model; the specific implementation method comprises the following steps: the method comprises the following steps of performing convolution filtering processing on an input infrared image and an input visible light image by using a dynamic receptive field model, and specifically comprises the following steps:
wherein ,andrespectively represent the response of the visible light image after the dynamic receptive field processing of the On central neuron and the dynamic receptive field processing of the Off central neuron,andrespectively representing the response of the infrared image after the dynamic receptive field processing of the On central neuron and the dynamic receptive field processing of the Off central neuron, wherein the superscript symbols V and IR respectively represent a visible light image and an infrared image, and x and y represent the coordinates of a pixel; CRF (x, y) and SRF (x, y) represent responses of the dynamic receptive field central receptive field and the peripheral receptive field, respectively, b (x, y) represents a weight of the peripheral receptive field, and symbol max represents a value equal to or greater than 0 for the response results after the dynamic receptive field of the On-centered neuron and the dynamic receptive field of the Off-centered neuron are processed;
the central receptive field response CRF (x, y) is calculated by the following equation:
CRF(x,y)=I(x,y)*DRF(x,y;d) (5)
in formula (5), I (x, y) represents an input visible light image or infrared image, and DRF (x, y; d) represents a dynamic gaussian filter:
d in the formula (6) represents the dimension of the dynamic Gaussian filter, and the value range of d is { sigma, lambda sigma }; the parameter σ represents the standard deviation of the gaussian function, and takes all real numbers in a range of [0.5, + ∞); λ represents the maximum dimension that can be achieved by controlling the dynamic gaussian filter, and all real numbers with a numeric range of [1, + ∞ ];
the dimension d of the dynamic gaussian filter is linked to the local contrast of the image:
d∝ΔI-1(x,y;σ) (7)
equation (7) shows that the dimension d of the dynamic Gaussian filter is inversely proportional to the local contrast Δ I (x, y; σ) of the image; examples of which are as follows: the local contrast is normalized to be in the range of [0,1] and is divided into N levels, and the value range of N is all integers of [2,64 ]. Here, taking N as 3 as an example, the normalized local contrast is divided into [0,13], [13,23], [23,1]3 levels. Since d belongs to { σ, λ σ }, where λ ═ 3 is taken as an example, the value of d is 3 σ in the range of local contrast [0,13], d is 2 σ in the range of local contrast [13,23], d is σ in the range of local contrast [23,1], that is, the value of d is inversely proportional to the local contrast of the image, and the smaller the local contrast is, the larger the value of d is, and vice versa.
The response SRF (x, y) of the peripheral receptive field is calculated specifically from the following equation:
SRF(x,y)=I(x,y)*DRF(x,y;3σ) (8)
the weight b (x, y) of the peripheral receptive field is related to the local contrast of the image:
b(x,y)∝ΔI-1(x,y;3σ) (9)
formula (9) shows that the weight b (x, y) of the peripheral receptive field is inversely proportional to the local contrast delta I (x, y; 3 sigma) of the image, and the value range of b (x, y) is [ -1,0 ]; examples of which are as follows: the local contrast is normalized to be in the range of [0,1] and is divided into N levels, and the value range of N is all integers of [2,64 ]. Here, taking N-3 as an example, the normalized local contrast is divided into three levels of [0,1/3], [1/3,2/3], [2/3,1 ]. Since b (x, y) is [ -1,0], the value of b (x, y) is-1/3 in the local contrast [0,13], 2/3 in the local contrast [1/3,2/3], and-1 in the local contrast [2/3,1], that is, bx (, y) is inversely proportional to the local contrast of the image, the larger the local contrast, the smaller the value of b (x, y), and vice versa.
The specific calculation method of the local contrasts DeltaI (x, y; sigma) and DeltaI (x, y; 3 sigma) is to select local regions with the region sizes of sigma and 3 sigma respectively around each pixel in the image I (x, y), and the local standard deviation calculated by taking the pixel as the center is used as the local contrast of the pixel.
S2, obtaining a fused image A and a fused image B: fusing the On central neuron response of the visible light image, the Off central neuron response of the infrared image and the original visible light image according to the contrast level of the visible light image to obtain a fused image A; fusing the On central neuron response of the infrared image, the Off central neuron response of the visible light image and the original infrared image according to the contrast level of the infrared image to obtain a fused image B:
the specific implementation method comprises the following steps: simulating cortical and subcortical visual information fusion mechanism of biological visual system, and responding On central neuron of visible light imageOff-centric neuronal response of infrared imagesAnd the original visible light image IVAnd (x, y) fusing the three components according to the contrast level of the visible light image to obtain a fused image A:
similarly, the On-centered neuron response of the infrared imageOff-centered neuronal response of visible light imagesAnd an original infrared image IIRAnd (x, y) fusing the three components according to the contrast level of the infrared image to obtain a fused image B:
wherein the weight parameter β is calculated according to the following formula:
representing the average, parameter, of the local contrast Δ I (x, y; σ)For controlling the slope, parameters of equation (12)Is any real number within the range of [0.5, + ∞).
S3, adding the fused images A and B to obtain a final fusion result of the infrared image and the visible light image; the specific implementation method comprises the following steps: images a and B are added according to the following formula: (A (x, y) + B (x, y))/2.
The present embodiment uses an infrared image and a visible light image (as shown in fig. 2, the images are derived from a TNO image dataset and are named "capture", and (a), (b) are the visible light image and the infrared image, respectively) with an image size of 270 × 360, and the parameter settings in the present embodiment are as follows, σ is 0.5, λ is 3,n is 3. The result of the dynamic receptive field processing using the On-center neurons and the Off-center neurons for a pair of the visible light image and the infrared image named "vegeation" in step S1 (equations (1) to (4)):as shown in the images (a), (B), (c), and (d) in fig. 3, the images a and B are calculated according to the equations (10) to (11), and the results a (x, y) (shown in fig. 4 (a)) and B (x, y) (shown in fig. 4 (B)) are obtained, and the two images are added according to the method of S3 to obtain the fusion result, as shown in fig. 5.
The above simple example is mainly illustrated and shown by taking the whole image as an example, the actual calculation is realized by performing corresponding operations such as local convolution filtering, addition, subtraction, multiplication, division and the like on all pixels of the whole image, and the actual numerical values and results are also experimental results directly valued in program operation. By such a simple example, the whole process of the infrared image and visible light image fusion method based on the early visual information processing mechanism is explained.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention and are to be construed as being without limitation to such specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.
Claims (6)
1. The method for fusing the infrared image and the visible light image based on the early visual information processing is characterized by comprising the following steps of:
s1, dynamic receptive field processing of On-center neurons and Off-center neurons: carrying out convolution filtering processing on the input infrared image and the input visible light image by using a dynamic receptive field model;
s2, obtaining a fused image A and a fused image B: fusing the On central neuron response of the visible light image, the Off central neuron response of the infrared image and the original visible light image according to the contrast level of the visible light image to obtain a fused image A; fusing the On central neuron response of the infrared image, the Off central neuron response of the visible light image and the original infrared image according to the contrast level of the infrared image to obtain a fused image B:
and S3, adding the fused images A and B to obtain the final fusion result of the infrared image and the visible light image.
2. The method for fusing an infrared image and a visible light image based on early visual information processing according to claim 1, wherein the step S1 is implemented by: the method comprises the following steps of performing convolution filtering processing on an input infrared image and an input visible light image by using a dynamic receptive field model, and specifically comprises the following steps:
wherein ,andrespectively represent the response of the visible light image after the dynamic receptive field processing of the On central neuron and the dynamic receptive field processing of the Off central neuron,andrespectively representing the response of the infrared image after the dynamic receptive field processing of the On central neuron and the dynamic receptive field processing of the Off central neuron, wherein the superscript symbols V and IR respectively represent a visible light image and an infrared image, and x and y represent the coordinates of a pixel; CRF (x, y) and SRF (x, y) represent dynamic receptive fieldsThe response of the heart receptive field and the response of the peripheral receptive field, b (x, y) represents the weight of the peripheral receptive field, and the symbol max represents that the response results after the dynamic receptive field of the On-center neuron and the dynamic receptive field of the Off-center neuron are greater than or equal to 0;
the central receptive field response CRF (x, y) is calculated by the following equation:
CRF(x,y)=I(x,y)*DRF(x,y;d) (5)
in formula (5), I (x, y) represents an input visible light image or infrared image, and DRF (x, y; d) represents a dynamic gaussian filter:
d in the formula (6) represents the dimension of the dynamic Gaussian filter, and the value range of d is { sigma, lambda sigma }; the parameter σ represents the standard deviation of the gaussian function, and takes all real numbers in a range of [0.5, + ∞); λ represents the maximum dimension that can be achieved by controlling the dynamic gaussian filter, and all real numbers with a numeric range of [1, + ∞ ];
the dimension d of the dynamic gaussian filter is linked to the local contrast of the image:
d∝ΔI-1(x,y;σ) (7)
equation (7) shows that the dimension d of the dynamic Gaussian filter is inversely proportional to the local contrast Δ I (x, y; σ) of the image;
the response SRF (x, y) of the peripheral receptive field is calculated specifically from the following equation:
SRF(x,y)=I(x,y)*DRF(x,y;3σ) (8)
the weight b (x, y) of the peripheral receptive field is related to the local contrast of the image:
b(x,y)∝ΔI-1(x,y;3σ) (9)
equation (9) indicates that the weight b (x, y) of the peripheral receptive field is inversely proportional to the local contrast Δ I (x, y; 3 σ) of the image.
3. The method of claim 2, wherein the local contrasts Δ I (x, y; σ) and Δ I (x, y; 3 σ) are calculated by selecting local regions with region sizes σ and 3 σ around each pixel in the image I (x, y), and the local standard deviation calculated with the pixel as the center is used as the local contrast of the pixel.
4. The method for fusing an infrared image and a visible light image based on early visual information processing according to claim 1, wherein the step S2 is implemented by: simulating cortical and subcortical visual information fusion mechanism of biological visual system, and responding On central neuron of visible light imageOff-centric neuronal response of infrared imagesAnd the original visible light image IVAnd (x, y) fusing the three components according to the contrast level of the visible light image to obtain a fused image A:
similarly, the On-centered neuron response of the infrared imageOff-centered neuronal response of visible light imagesAnd an original infrared image IIRAnd (x, y) fusing the three components according to the contrast level of the infrared image to obtain a fused image B:
wherein the weight parameter β is calculated according to the following formula:
6. The method for fusing an infrared image and a visible light image based on early visual information processing according to claim 4, wherein the image fusion in step S3 is implemented by: images a and B are added according to the following formula: (A (x, y) + B (x, y))/2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010516394.1A CN111724333B (en) | 2020-06-09 | 2020-06-09 | Infrared image and visible light image fusion method based on early visual information processing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010516394.1A CN111724333B (en) | 2020-06-09 | 2020-06-09 | Infrared image and visible light image fusion method based on early visual information processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111724333A true CN111724333A (en) | 2020-09-29 |
CN111724333B CN111724333B (en) | 2023-05-30 |
Family
ID=72566263
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010516394.1A Active CN111724333B (en) | 2020-06-09 | 2020-06-09 | Infrared image and visible light image fusion method based on early visual information processing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111724333B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113409232A (en) * | 2021-06-16 | 2021-09-17 | 吉林大学 | Bionic false color image fusion model and method based on sidewinder visual imaging |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103985115A (en) * | 2014-04-01 | 2014-08-13 | 杭州电子科技大学 | Image multi-strength edge detection method having visual photosensitive layer simulation function |
WO2015157058A1 (en) * | 2014-04-07 | 2015-10-15 | Bae Systems Information & Electronic Systems Integration Inc. | Contrast based image fusion |
CN110120028A (en) * | 2018-11-13 | 2019-08-13 | 中国科学院深圳先进技术研究院 | A kind of bionical rattle snake is infrared and twilight image Color Fusion and device |
CN110427823A (en) * | 2019-06-28 | 2019-11-08 | 北京大学 | Joint objective detection method and device based on video frame and pulse array signals |
CN110458877A (en) * | 2019-08-14 | 2019-11-15 | 湖南科华军融民科技研究院有限公司 | The infrared air navigation aid merged with visible optical information based on bionical vision |
-
2020
- 2020-06-09 CN CN202010516394.1A patent/CN111724333B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103985115A (en) * | 2014-04-01 | 2014-08-13 | 杭州电子科技大学 | Image multi-strength edge detection method having visual photosensitive layer simulation function |
WO2015157058A1 (en) * | 2014-04-07 | 2015-10-15 | Bae Systems Information & Electronic Systems Integration Inc. | Contrast based image fusion |
CN110120028A (en) * | 2018-11-13 | 2019-08-13 | 中国科学院深圳先进技术研究院 | A kind of bionical rattle snake is infrared and twilight image Color Fusion and device |
CN110427823A (en) * | 2019-06-28 | 2019-11-08 | 北京大学 | Joint objective detection method and device based on video frame and pulse array signals |
CN110458877A (en) * | 2019-08-14 | 2019-11-15 | 湖南科华军融民科技研究院有限公司 | The infrared air navigation aid merged with visible optical information based on bionical vision |
Non-Patent Citations (4)
Title |
---|
MIN-JIE TAN 等: "Visible-Infrared Image Fusion Based on Early Visual Information Processing Mechanisms" * |
ZHEN ZHANG 等: "Bionic Algorithm for Color Fusion of Infrared and Low Light Level Image Based on Rattlesnake Bimodal Cells" * |
倪国强 等: "基于响尾蛇双模式细胞机理的可见光/红外图像彩色融合技术的优势和前景展望" * |
罗佳骏 等: "基于视觉感光层功能的菌落图像多强度边缘检测研究" * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113409232A (en) * | 2021-06-16 | 2021-09-17 | 吉林大学 | Bionic false color image fusion model and method based on sidewinder visual imaging |
Also Published As
Publication number | Publication date |
---|---|
CN111724333B (en) | 2023-05-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Sezer et al. | Detection of solder paste defects with an optimization‐based deep learning model using image processing techniques | |
RU2709661C1 (en) | Training neural networks for image processing using synthetic photorealistic containing image signs | |
Raju et al. | A fast and efficient color image enhancement method based on fuzzy-logic and histogram | |
Padmavathi et al. | Implementation of RGB and grayscale images in plant leaves disease detection–comparative study | |
CN106952246A (en) | The visible ray infrared image enhancement Color Fusion of view-based access control model attention characteristic | |
CN101980248A (en) | Improved visual attention model-based method of natural scene object detection | |
Gu et al. | Research on the improvement of image edge detection algorithm based on artificial neural network | |
CN112580661B (en) | Multi-scale edge detection method under deep supervision | |
CN110246154B (en) | Visual target tracking method based on ICA-R multi-feature fusion and self-adaptive updating | |
Wang et al. | Automatic illumination planning for robot vision inspection system | |
CN109598200B (en) | Intelligent image identification system and method for molten iron tank number | |
Khavalko et al. | Image classification and recognition on the base of autoassociative neural network usage | |
CN111724333B (en) | Infrared image and visible light image fusion method based on early visual information processing | |
Drass et al. | Semantic segmentation with deep learning: detection of cracks at the cut edge of glass | |
Chen et al. | Image processing and understanding based on the fuzzy inference approach | |
Zhuang et al. | Image enhancement by deep learning network based on derived image and retinex | |
Boyun | The principles of organizing the search for an object in an image, tracking an object and the selection of informative features based on the visual perception of a person | |
Singh et al. | Multiscale reflection component based weakly illuminated nighttime image enhancement | |
WO2022044673A1 (en) | Image processing device, inspection system, and inspection method | |
Yu et al. | Detail enhancement decolorization algorithm based on rolling guided filtering | |
Gajpal et al. | Edge detection technique using hybrid fuzzy logic method | |
Parihar et al. | Dehazing optically haze images with AlexNet-FNN | |
Sat et al. | Object detection and recognition system for pick and place robot | |
Cao | [Retracted] Optimization of Plane Image Color Enhancement Based on Computer Vision | |
Alavi et al. | A novel method for contrast enhancement of gray scale image based on shadowed sets |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |