CN108090888A - Fusion detection method for infrared and visible-light images based on a visual attention model - Google Patents
Fusion detection method for infrared and visible-light images based on a visual attention model Download PDF Info
- Publication number
- CN108090888A CN108090888A CN201810007446.5A CN201810007446A CN108090888A CN 108090888 A CN108090888 A CN 108090888A CN 201810007446 A CN201810007446 A CN 201810007446A CN 108090888 A CN108090888 A CN 108090888A
- Authority
- CN
- China
- Prior art keywords
- image
- visible images
- interest
- fusion
- targets
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06T5/73—
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The present invention provides a fusion detection method for infrared and visible-light images based on a visual attention model. The method includes: preprocessing the collected infrared image and visible-light image respectively; extracting targets of interest from the preprocessed infrared and visible-light images based on a visual attention model of the human eye; taking the preprocessed visible-light image as background, fusing the corresponding preprocessed infrared image with it at the gray level to obtain a grayscale fusion image; and applying pseudo-color mapping marks for the targets of interest to the grayscale fusion image to obtain and output a target pseudo-color fusion image. The present invention solves the prior-art problem of targets of interest being weakened in the fusion image, greatly improves the accuracy and reliability of detecting and identifying targets of interest, and reduces the difficulty of target identification.
Description
Technical field
This application relates to the technical field of image processing, and in particular to a fusion detection method for infrared and visible-light images based on a visual attention model, which is applicable to the detection and identification of dim-and-small targets, or of targets against complex backgrounds, in fusion images.
Background art
An infrared thermal imaging system generates a scene image mainly from the radiation of the target and the background, but infrared images have poor contrast: they are sensitive only to the radiation of the target scene and insensitive to brightness changes in the scene. Compared with an infrared image, a visible-light image provides more target detail and is better suited to human observation. Infrared/visible-light image fusion technology can therefore present, in a single image, the redundancy and complementarity of the information in the multi-source images acquired by multiple sensors.
In dim-and-small target detection, infrared and visible-light images share common traits: low signal-to-noise ratio, strong spatial and temporal correlation of the image signal, and very weak spatial and temporal correlation of the image noise. The gray-level distribution of an infrared target is highly concentrated and has large gray values, whereas in the visible-light image the target gray level is almost submerged in the background and hard to separate from it, which severely challenges image quality. In the infrared/visible-light images of some scenes, if the gray value of the other source image at the position of an infrared small target is too high, or the background is too complex, identification of that target in the fusion result suffers.
For example, as shown in Fig. 1a to Fig. 1d, Fig. 1a is an example of an infrared image of sea-surface targets. In this infrared image, besides one obvious large target and three obvious small targets in the foreground area, there is a suspected small target in the distant area (marked with a circle). However, the corresponding area of the visible-light image in Fig. 1b is a bright background, so the suspected small target of Fig. 1a is almost completely submerged in the background of Fig. 1b and is difficult to separate from it. Such cases challenge the validity of the image fusion result.
In the prior art, the infrared image of Fig. 1a and the visible-light image of Fig. 1b are generally fused with a wavelet-transform-based grayscale fusion method or a Waxman-algorithm-based pseudo-color fusion method, yielding fusion images such as the grayscale fusion image of Fig. 1c and the pseudo-color fusion image of Fig. 1d.
However, as shown in Fig. 1c and Fig. 1d, because of the influence of the mid-wave infrared image background, the contrast of the suspected small target of Fig. 1a is greatly diminished in the fusion image; in the Waxman pseudo-color fusion image in particular, it is almost completely submerged in the background. Detecting the suspected small target in the grayscale or pseudo-color fusion image is then even harder than detecting it in a single source image.
In summary, because of the above shortcomings and deficiencies, prior-art image fusion methods may reduce the contrast of targets after fusion, significantly increasing the difficulty of detecting suspected targets in the image.
Summary of the invention
In view of this, the present invention provides a fusion detection method for infrared and visible-light images based on a visual attention model, so as to solve the prior-art problem of targets of interest being weakened in the fusion image, greatly improve the accuracy and reliability of detecting and identifying targets of interest, and reduce the difficulty of target identification.
The technical solution of the present invention is specifically realized as follows:
A fusion detection method for infrared and visible-light images based on a visual attention model, the method comprising:
preprocessing the collected infrared image and visible-light image respectively;
extracting targets of interest from the preprocessed infrared and visible-light images based on a visual attention model of the human eye;
taking the preprocessed visible-light image as background, fusing the corresponding preprocessed infrared image with it at the gray level to obtain a grayscale fusion image;
applying pseudo-color mapping marks for the targets of interest to the grayscale fusion image, and obtaining and outputting a target pseudo-color fusion image.
Preferably, extracting targets of interest from the preprocessed infrared and visible-light images based on the visual attention model of the human eye includes:
generating a static-image brightness saliency map from an image sequence composed of multiple preprocessed infrared images;
generating a motion saliency map from the image sequence composed of multiple preprocessed infrared images;
fusing the brightness saliency map and the motion saliency map by weighted summation in a preset ratio to obtain the final feature saliency map of the infrared image;
separating, according to pixel gray-level similarity and salient-region centroid adjacency, the targets of interest from the background in the feature saliency map of the infrared image by a local adaptive threshold segmentation method, and extracting the targets of interest in the infrared image;
generating a static-image brightness saliency map from an image sequence composed of multiple preprocessed visible-light images;
generating a motion saliency map of the visible-light images from that image sequence;
fusing the brightness saliency map and the motion saliency map by weighted summation in a preset ratio to obtain the final feature saliency map of the visible-light image;
separating, according to pixel gray-level similarity and salient-region centroid adjacency, the targets of interest from the background in the feature saliency map of the visible-light image by the local adaptive threshold segmentation method, and extracting the targets of interest in the visible-light image;
fusing the targets of interest in the infrared and visible-light images with a predetermined fusion rule to obtain the final targets of interest in a target fusion image.
Preferably, generating the static-image brightness saliency map from the image sequence composed of multiple preprocessed infrared images includes:
building an image pyramid for each preprocessed infrared image in the image sequence to obtain infrared images at multiple different resolutions;
transforming each infrared image of different resolution and computing inter-scale differences to obtain multiple feature difference maps;
fusing the obtained feature difference maps with a normalization operator to obtain the final static-image brightness saliency map.
Preferably, generating the motion saliency map from the image sequence composed of multiple preprocessed infrared images includes:
obtaining image motion vectors from the image sequence composed of multiple preprocessed infrared images, the image motion vectors including an intensity difference, a spatial-consistency difference, and a temporal-consistency difference;
generating the motion saliency map according to the intensity difference, the spatial-consistency difference, and the temporal-consistency difference.
Preferably, the predetermined fusion rule is an "OR" rule, in which, for each pixel in the corresponding target-of-interest regions of the visible-light and infrared images, the maximum of the two gray values of that pixel is taken as its gray value in the target fusion image.
Preferably, taking the preprocessed visible-light image as background and fusing the corresponding preprocessed infrared image with it at the gray level to obtain the grayscale fusion image includes:
selecting a wavelet basis function and a decomposition level, and performing multiresolution decomposition on the preprocessed visible-light image and the preprocessed infrared image respectively to obtain visible-light and infrared images in different scale spaces;
extracting image edge features from the low-frequency components of the visible-light and infrared images in each scale space;
performing a fusion operation on the visible-light and infrared images in the different scale spaces according to a preset fusion rule to obtain a multiresolution representation of the fusion image, and obtaining the grayscale fusion image after an inverse wavelet transform.
Preferably, the preset fusion rule is a weighted fusion rule that determines the proportion of each source image according to gradient and information entropy.
Preferably, applying the pseudo-color mapping marks for the targets of interest to the grayscale fusion image and obtaining and outputting the target pseudo-color fusion image includes:
inverse-mapping the obtained targets of interest in HSV space into the grayscale fusion image, and applying pseudo-color mapping marks to the targets of interest to obtain the target pseudo-color fusion image.
Preferably, applying the pseudo-color mapping marks to the targets of interest includes:
marking the targets of interest extracted according to the source-image features in the grayscale fusion image with preset colors.
As can be seen from the above, in the fusion detection method for infrared and visible-light images based on a visual attention model of the present invention, targets of interest are first extracted from the preprocessed infrared image based on the visual attention model of the human eye; the preprocessed visible-light and infrared images are then fused to obtain a grayscale fusion image; and pseudo-color mapping marks for the targets of interest are then applied to the grayscale fusion image, marking the targets extracted according to the source-image features with specific colors, to obtain and output a target pseudo-color fusion image. The targets of interest are thus highlighted with color without affecting the gray-level background, which solves the prior-art problem of targets of interest being weakened in the fusion image, greatly improves the accuracy and reliability of detecting and identifying targets of interest, and reduces the difficulty of target identification. Moreover, since pseudo-color mapping marks are applied only to the local regions where the targets of interest are located in the target pseudo-color fusion image, visual fatigue of the human eye is not easily caused.
Description of the drawings
Fig. 1a is an example of a prior-art infrared image of sea-surface targets.
Fig. 1b is an example of a prior-art visible-light image of sea-surface targets.
Fig. 1c is an example of a prior-art wavelet-transform-based grayscale fusion image of sea-surface targets.
Fig. 1d is an example of a prior-art Waxman-algorithm-based pseudo-color fusion image of sea-surface targets.
Fig. 2 is an example of a fusion image in an embodiment of the present invention.
Fig. 3 is a schematic flowchart of the fusion detection method for infrared and visible-light images based on a visual attention model in an embodiment of the present invention.
Fig. 4 is a schematic flowchart of a specific implementation of step 32 in an embodiment of the present invention.
Fig. 5 is a schematic flowchart of a specific implementation of step 33 in an embodiment of the present invention.
Fig. 6a is an example of the visible-light image in a specific experiment of an embodiment of the present invention.
Fig. 6b is an example of the long-wave infrared image in a specific experiment of an embodiment of the present invention.
Fig. 6c is an example of the fusion image obtained with the WMM method in a specific experiment of an embodiment of the present invention.
Fig. 6d is an example of the fusion image obtained with the WRE method in a specific experiment of an embodiment of the present invention.
Fig. 6e is an example of the target pseudo-color fusion image obtained in a specific experiment with the fusion detection method for infrared and visible-light images based on a visual attention model of an embodiment of the present invention.
Fig. 6f is an example of the target-of-interest information contained in the target pseudo-color fusion image obtained in a specific experiment with the fusion detection method for infrared and visible-light images based on a visual attention model of an embodiment of the present invention.
Specific embodiments
To make the technical solution and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings and specific embodiments.
To overcome the shortcomings and deficiencies of prior-art image fusion methods (for example, the influence of highly saturated colors on dim-and-small targets shown in Fig. 1d above), the present invention proposes a pseudo-color visible-light and infrared image fusion detection method based on a visual attention model.
Fig. 3 is a schematic flowchart of the fusion detection method for infrared and visible-light images based on a visual attention model in an embodiment of the present invention. As shown in Fig. 3, the method includes the following steps:
Step 31: preprocess the collected infrared image and visible-light image respectively.
In the technical solution of the present invention, the required low-SNR infrared images and visible-light images may first be acquired by an infrared imaging device and a visible-light imaging device that have undergone optical registration, and the collected infrared and visible-light images are then preprocessed respectively to reduce adverse effects such as noise and distortion and to improve the quality of the images to be fused.
In addition, in the technical solution of the present invention, various processing methods may be used to preprocess the collected infrared and visible-light images. For example, these may include Gaussian filtering (to remove noise from the image) and geometric registration (to remove distortion from the image).
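The Gaussian-filtering preprocessing mentioned above can be sketched as follows. This is a minimal NumPy stand-in using a separable kernel; the patent does not fix the kernel size or sigma, so `size=5, sigma=1.0` are assumptions.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """1-D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    k = np.exp(-0.5 * (ax / sigma) ** 2)
    return k / k.sum()

def denoise(img, size=5, sigma=1.0):
    """Separable Gaussian smoothing of a 2-D grayscale image."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    # horizontal pass, then vertical pass
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, tmp)
```

A separable pass costs O(size) per pixel instead of O(size²), which matters when both source streams must be smoothed per frame.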
Step 32: extract targets of interest from the preprocessed infrared and visible-light images based on the visual attention model of the human eye.
For example, in this step the thermal targets that easily attract attention (i.e., the targets of interest) may be detected based on the visual attention mechanism of the human eye (i.e., the visual attention model), so that the required targets of interest are extracted from the preprocessed infrared and visible-light images respectively, yielding reliable target information.
In addition, in the technical solution of the present invention, step 32 may be implemented in various ways. The technical solution is described in detail below taking one implementation as an example.
For example, preferably, in a specific embodiment of the present invention, step 32 may include the following steps:
Step 321: generate a static-image brightness saliency map from the image sequence composed of multiple preprocessed infrared images.
For example, preferably, in a specific embodiment of the present invention, step 321 may include:
building an image pyramid (for example, a wavelet pyramid) for each preprocessed infrared image in the image sequence to obtain infrared images at multiple different resolutions;
transforming each infrared image of different resolution and computing inter-scale differences to obtain multiple feature difference maps;
fusing the obtained feature difference maps with a normalization operator to obtain the final static-image brightness saliency map.
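The pyramid / inter-scale difference / normalization procedure above can be sketched as follows. This is a simplified stand-in (block-average pyramid instead of a wavelet pyramid, nearest-neighbour upsampling, min-max normalization operator); the patent does not fix these choices.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (one pyramid level)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample_to(img, shape):
    """Nearest-neighbour resize back to a reference shape."""
    ys = (np.arange(shape[0]) * img.shape[0] // shape[0]).clip(0, img.shape[0] - 1)
    xs = (np.arange(shape[1]) * img.shape[1] // shape[1]).clip(0, img.shape[1] - 1)
    return img[np.ix_(ys, xs)]

def brightness_saliency(img, levels=3):
    """Center-surround differences across pyramid levels, normalized and summed."""
    pyr = [img.astype(float)]
    for _ in range(levels):
        pyr.append(downsample(pyr[-1]))
    sal = np.zeros_like(pyr[0])
    for coarse in pyr[1:]:
        diff = np.abs(pyr[0] - upsample_to(coarse, pyr[0].shape))
        rng = diff.max() - diff.min()
        if rng > 0:                      # normalization operator N(.)
            diff = (diff - diff.min()) / rng
        sal += diff
    return sal / levels
```

A small bright region differs strongly from its coarse-scale surround at every level, so it accumulates saliency, while smooth background largely cancels.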
Step 322: generate a motion saliency map from the image sequence composed of multiple preprocessed infrared images.
For example, preferably, in a specific embodiment of the present invention, step 322 may include:
obtaining image motion vectors from the image sequence composed of multiple preprocessed infrared images, the image motion vectors including an intensity difference, a spatial-consistency difference, and a temporal-consistency difference;
generating the motion saliency map according to the intensity difference, the spatial-consistency difference, and the temporal-consistency difference.
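A much-simplified motion-saliency stand-in is sketched below. The patent derives its map from motion vectors with intensity, spatial-consistency, and temporal-consistency differences; plain temporal frame differencing, used here, is only an assumed approximation of the intensity-difference term.

```python
import numpy as np

def motion_saliency(frames):
    """Temporal intensity differences averaged over consecutive frame
    pairs and min-max normalized to [0, 1]."""
    frames = [f.astype(float) for f in frames]
    acc = np.zeros_like(frames[0])
    for prev, cur in zip(frames, frames[1:]):
        acc += np.abs(cur - prev)
    acc /= max(len(frames) - 1, 1)
    rng = acc.max() - acc.min()
    return (acc - acc.min()) / rng if rng > 0 else acc
```

Pixels visited by a moving target in several frame pairs score highest, which is the behaviour the weighted combination in step 323 relies on.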
Step 323: fuse the brightness saliency map and the motion saliency map by weighted summation in a preset ratio to obtain the final feature saliency map of the infrared image.
Step 324: according to pixel gray-level similarity and salient-region centroid adjacency, separate the targets of interest from the background in the feature saliency map of the infrared image by a local adaptive threshold segmentation method, and extract the targets of interest in the infrared image.
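The local adaptive thresholding of step 324 can be sketched as below. This stand-in thresholds each block against its own statistics only; the gray-level-similarity and centroid-adjacency criteria the patent adds on top are omitted, and the block size and bias are assumptions.

```python
import numpy as np

def local_adaptive_threshold(sal, block=8, bias=0.0):
    """Per-block adaptive segmentation of a saliency map: a pixel is
    foreground if it exceeds its block's mean plus a bias."""
    h, w = sal.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = sal[y:y + block, x:x + block]
            mask[y:y + block, x:x + block] = patch > patch.mean() + bias
    return mask
```

Thresholding locally rather than globally lets a dim target stand out in its own neighbourhood even when brighter regions exist elsewhere in the map.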
Through the above steps 321 to 324, targets of interest can be extracted from the preprocessed infrared images based on the visual attention model of the human eye.
Step 325: generate a static-image brightness saliency map from an image sequence composed of multiple preprocessed visible-light images.
The specific implementation of this step is the same as or similar to that of step 321 and is not repeated here.
Step 326: generate a motion saliency map of the visible-light images from the image sequence composed of multiple preprocessed visible-light images.
The specific implementation of this step is the same as or similar to that of step 322 and is not repeated here.
Step 327: fuse the brightness saliency map and the motion saliency map by weighted summation in a preset ratio to obtain the final feature saliency map of the visible-light image.
The specific implementation of this step is the same as or similar to that of step 323 and is not repeated here.
Step 328: according to pixel gray-level similarity and salient-region centroid adjacency, separate the targets of interest from the background in the feature saliency map of the visible-light image by the local adaptive threshold segmentation method, and extract the targets of interest in the visible-light image.
The specific implementation of this step is the same as or similar to that of step 324 and is not repeated here.
Through the above steps 325 to 328, targets of interest can be extracted from the preprocessed visible-light images based on the visual attention model of the human eye.
In addition, steps 325 to 328 may be performed synchronously with steps 321 to 324, or before or after them; the present invention is not limited in this respect.
Step 329: fuse the targets of interest in the infrared and visible-light images with a predetermined fusion rule to obtain the final targets of interest in a target fusion image.
For example, preferably, in a specific embodiment of the present invention, the predetermined fusion rule may be an "OR" rule: for each pixel in the corresponding target-of-interest regions of the visible-light and infrared images, the maximum of the two gray values of that pixel is taken as its gray value in the target fusion image.
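The "OR" rule above is a per-pixel maximum over the combined target regions, which can be sketched directly (the zero fill outside the mask is an assumption; the patent only specifies the behaviour inside the target regions):

```python
import numpy as np

def fuse_targets_or(ir_img, vis_img, mask):
    """'OR' fusion rule: inside the combined target-of-interest mask,
    take the larger of the infrared and visible gray values; elsewhere
    leave the target fusion image empty (zero)."""
    return np.where(mask, np.maximum(ir_img, vis_img), 0.0)
```

Taking the maximum means a target visible in either modality survives into the target fusion image, which is the point of an "OR" combination.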
Through the above steps 321 to 329, targets of interest can be extracted from the preprocessed infrared and visible-light images based on the visual attention model of the human eye.
Step 33: taking the preprocessed visible-light image as background, fuse the corresponding preprocessed infrared image with it at the gray level to obtain a grayscale fusion image.
In this step, a fusion operation is performed in the wavelet domain on the preprocessed visible-light image and the preprocessed infrared image according to a pre-established fusion rule, to obtain the grayscale fusion image. In this fusion operation, the preprocessed visible-light image may serve as background, and the corresponding preprocessed infrared image is fused into it to obtain the grayscale fusion image.
In addition, in the technical solution of the present invention, step 33 may be implemented in various ways. The technical solution is described in detail below taking one implementation as an example.
For example, preferably, in a specific embodiment of the present invention, step 33 may include the following steps:
Step 331: select a wavelet basis function and a decomposition level, and perform multiresolution decomposition on the preprocessed visible-light image and the preprocessed infrared image respectively to obtain visible-light and infrared images in different scale spaces.
For example, in a specific preferred embodiment of the present invention, the multiresolution decomposition may be performed with the discrete wavelet transform (DWT).
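One level of such a decomposition can be sketched with the Haar basis, the simplest choice of wavelet basis function (the patent leaves the basis and level open, so Haar and a single level are assumptions). The forward transform splits an image into a low-frequency approximation `LL` and detail bands; the inverse reassembles it exactly.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: low-pass LL plus detail bands (LH, HL, HH)."""
    a = img.astype(float)
    a = a[:a.shape[0] // 2 * 2, :a.shape[1] // 2 * 2]   # even dimensions
    lo_r = (a[0::2] + a[1::2]) / 2                      # row averages
    hi_r = (a[0::2] - a[1::2]) / 2                      # row differences
    ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2
    lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2
    hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2
    hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Exact inverse of haar_dwt2."""
    lh, hl, hh = bands
    lo_r = np.empty((ll.shape[0], ll.shape[1] * 2))
    hi_r = np.empty_like(lo_r)
    lo_r[:, 0::2] = ll + lh; lo_r[:, 1::2] = ll - lh
    hi_r[:, 0::2] = hl + hh; hi_r[:, 1::2] = hl - hh
    out = np.empty((ll.shape[0] * 2, ll.shape[1] * 2))
    out[0::2] = lo_r + hi_r
    out[1::2] = lo_r - hi_r
    return out
```

In practice both source images are decomposed this way, the sub-bands are fused coefficient-wise, and the inverse transform of step 333 yields the grayscale fusion image.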
Step 332: extract image edge features from the low-frequency components of the visible-light and infrared images in each scale space.
Step 333: perform a fusion operation on the visible-light and infrared images in the different scale spaces according to a preset fusion rule to obtain a multiresolution representation of the fusion image, and obtain the grayscale fusion image after an inverse wavelet transform.
For example, preferably, in a specific embodiment of the present invention, the preset fusion rule may be a weighted fusion rule that determines the proportion of each source image according to indexes such as gradient and information entropy.
Through the above steps 331 to 333, the grayscale fusion image is obtained.
Step 34: apply pseudo-color mapping marks for the targets of interest to the grayscale fusion image, and obtain and output the target pseudo-color fusion image.
For example, preferably, in a specific embodiment of the present invention, the obtained targets of interest may be inverse-mapped in HSV space into the grayscale fusion image, and pseudo-color mapping marks applied to them, to obtain the target pseudo-color fusion image.
Among the many color spaces, the HSV color space reduces the complexity of color image processing and is closer to the human eye's understanding and interpretation of color. Therefore, in the above preferred embodiment, the HSV color space may be used for the pseudo-color mapping marks.
As another example, preferably, in a specific embodiment of the present invention, applying the pseudo-color mapping marks to the targets of interest may include:
marking the targets of interest extracted according to the source-image features in the grayscale fusion image with preset colors (for example, red or green).
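The marking step can be sketched as below. For brevity this stand-in paints the target mask directly in RGB rather than performing the HSV inverse mapping the patent describes; the default red color is an assumption.

```python
import numpy as np

def mark_targets(gray, mask, color=(255, 0, 0)):
    """Pseudo-color marking: replicate the fused gray image into three
    channels and paint only the target-of-interest pixels with a
    saturated color, leaving the gray background untouched."""
    rgb = np.stack([gray, gray, gray], axis=-1).astype(np.uint8)
    rgb[mask] = color
    return rgb
```

Because only the masked local regions change, the color highlights the targets without altering the gray-level background, matching the effect described above.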
In the technical solution of the present invention, the fusion image is represented in gray levels, and only the targets of interest are highlighted in the grayscale fusion image with pseudo-colors that suit the psychological characteristics of the human eye, forming the target pseudo-color fusion image. The targets are therefore more prominent when a human observes the fusion image, achieving the effect of highlighting the targets of interest with color without affecting the gray-level background, which greatly improves the accuracy and reliability of detecting and identifying targets of interest and reduces the difficulty of target identification. Moreover, since pseudo-color mapping marks are applied only to the local regions where the targets of interest are located in the target pseudo-color fusion image, visual fatigue of the human eye is not easily caused.
The advantageous effects of the present invention can be illustrated with actual experimental data. For example, in an actual experiment, the target pseudo-color fusion image obtained with the method of the present invention can be compared with fusion images obtained with the wavelet-decomposition modulus-maximum rule (the WMM method) and the wavelet-decomposition regional-energy-maximum rule (the WRE method).
As shown in Figs. 6a to 6f, in the visible-light image of Fig. 6a the details of the house and trees are relatively clear, but two plumes of smoke completely occlude target information. In the long-wave infrared image of Fig. 6b, the outlines of the house and trees are relatively clear, and foreground targets such as a person (target one) and a burner body (target two) are obvious; there is also a background person (target three) whose contrast is too low and who is too far away. Figs. 6c and 6d show examples of the fusion images obtained with the prior-art WMM and WRE methods. As can be seen from Figs. 6c and 6d, target two is obvious in the fusion image, the visual impression of target one is fainter, and target three is completely submerged because of the smoke in the visible-light image. Therefore, if the above targets of interest are detected in the fusion images obtained with the WMM and WRE methods, targets one and two can be detected, but target three is likely to be missed.
If, however, the fusion detection method for infrared and visible images based on a visual attention model of the present invention is used, the targets of interest can first be extracted from the infrared and visible images by the human-eye visual attention model before image fusion is carried out, so that reliable target information is obtained; image fusion is then performed in the wavelet domain according to the pre-established fusion rules; finally the targets of interest are inverse-mapped by false color into the grayscale fusion image, yielding the target pseudo-color fusion image shown, for example, in Fig. 6e. In this target pseudo-color fusion image all three targets of interest are prominently displayed, so the target information becomes significantly more reliable, and the target pseudo-color fusion image is also more expressive than the prior-art grayscale fusion images.
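The overall pipeline described above (extract targets, fuse to grayscale, then paint the target region) can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the function name `fuse_and_mark`, the pixel-averaging fusion (a stand-in for the patent's wavelet-domain fusion), and the fixed marking color are all assumptions for demonstration.

```python
import numpy as np

def fuse_and_mark(ir, vis, target_mask, color=(255, 0, 0)):
    """Sketch of the pipeline: grayscale fusion of the two source images,
    then pseudo-color marking of the target region only."""
    # Pixel-wise average stands in for the wavelet-domain fusion step.
    fused = ((ir.astype(np.float64) + vis.astype(np.float64)) / 2.0).astype(np.uint8)
    # Replicate the grayscale result into RGB, then paint the target region.
    rgb = np.stack([fused] * 3, axis=-1)
    rgb[target_mask] = color
    return rgb

ir = np.full((4, 4), 200, dtype=np.uint8)
vis = np.full((4, 4), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
out = fuse_and_mark(ir, vis, mask)
print(out[0, 0], out[1, 1])  # background stays gray (150), target is marked red
```

The grayscale background is untouched; only the masked pixels receive color, which is what keeps the marking from disturbing the rest of the image.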
For example, in the fusion images of Figs. 6c and 6d, "ghosting" appears at the corner of the house on the left. This is because the object edges in the infrared image are relatively blurred and the energy of the edge points is not sufficiently concentrated. The target pseudo-color fusion image of Fig. 6e effectively mitigates this phenomenon. In addition, the target pseudo-color fusion image contains the target information, and that information is more reliable than the results of detecting targets directly in a fusion image: in subsequent processing, simple operations such as color segmentation suffice to recover the target information (as shown in Fig. 6f). Moreover, while effectively retaining the target information and detail of the source images, the target pseudo-color fusion image of the present invention also makes the whole image look cleaner and finer; these advantages stem from the edge-feature-based fusion rule.
Table 1 below gives the objective evaluation results of the three fusion methods:
| Cross entropy | Structural similarity | Fusion feature entropy | Image signal-to-noise ratio |
---|---|---|---|---|
WMM method | 3.8798 | 0.7983 | 19.4839 | 28.3713 |
WRE method | 3.8366 | 0.8032 | 20.2699 | 29.8120 |
The present invention | 3.8470 | 0.8251 | 21.0753 | 32.5567 |
As Table 1 shows, compared with the prior-art WMM and WRE methods, the fusion detection method for infrared and visible images based on a visual attention model of the present invention can greatly improve the accuracy and reliability of detection and identification of the targets of interest, and reduce the difficulty of target identification.
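Two of the metrics in Table 1 can be sketched as follows. The patent does not spell out its exact formulas, so these are common definitions from the fusion literature and should be read as assumptions: cross entropy here is the KL-style divergence of the gray-level histograms, and SNR is computed in decibels against a reference image.

```python
import numpy as np

def _hist(img, bins=256):
    # Normalized gray-level histogram (an empirical probability distribution).
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def cross_entropy(src, fused):
    # sum p*log2(p/q) over bins where both are non-zero -- one common
    # definition of the "cross entropy" fusion metric; assumed, not the
    # patent's stated formula.
    p, q = _hist(src), _hist(fused)
    m = (p > 0) & (q > 0)
    return float(np.sum(p[m] * np.log2(p[m] / q[m])))

def snr_db(ref, fused):
    # Signal-to-noise ratio of the fused image against a reference, in dB.
    ref = ref.astype(np.float64)
    err = ref - fused.astype(np.float64)
    return float(10 * np.log10((ref ** 2).sum() / (err ** 2).sum()))

a = np.random.default_rng(0).integers(0, 256, (32, 32)).astype(np.uint8)
b = np.clip(a.astype(int) + 5, 0, 255).astype(np.uint8)
print(cross_entropy(a, a))  # 0.0 for identical distributions
print(snr_db(a, b))         # high for a small perturbation
```

Lower cross entropy means the fused image's gray-level distribution stays closer to the source; higher SNR means less distortion, matching the direction of improvement claimed in Table 1.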
In conclusion in the inventive solutions, due to first can the visual attention model based on human eye, from pre-
Treated infrared image and targets of interest is extracted in visible images, then by pretreated visible images and infrared
Image co-registration obtains grayscale fusion image, then carries out the False color mapping mark of the targets of interest to grayscale fusion image again
Note by the target come out according to source images feature extraction with specific color mark, obtains and exports target pseudo-colours fusion figure
Picture so as to achieve the effect that highlight targets of interest without influencing its gray scale background with color, and then solves existing
The problem of interested target in technology is weakened in blending image can greatly improve detection and knowledge to targets of interest
Other accuracy and reliability reduces the difficulty of target identification.Moreover, because it is pair in above-mentioned target pseudo-colours blending image
Regional area where targets of interest carries out False color mapping mark, therefore also cannot be easily caused the visual fatigue of human eye.In addition,
Carrying out target detection to infrared image before image co-registration also can effectively avoid another source images (i.e. visible images) to low
Influence of the contrast target in fusion process.Therefore, the infrared image of view-based access control model attention model provided by the present invention and
The fusion detection method of visible images is applicable to the inspection of the target to the Weak target in blending image or background complexity
It surveys in identification, it is more efficient especially for target this method of complex background and low contrast.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (9)
1. A fusion detection method for an infrared image and a visible image based on a visual attention model, characterized in that the method comprises:
pre-processing the collected infrared image and visible image respectively;
extracting targets of interest from the pre-processed infrared image and visible image based on a human-eye visual attention model;
taking the pre-processed visible image as background, performing grayscale image fusion of the corresponding pre-processed infrared image with the visible image serving as background, to obtain a grayscale fusion image;
performing false-color mapping marking of the targets of interest on the grayscale fusion image, and obtaining and outputting a target pseudo-color fusion image.
2. The method according to claim 1, characterized in that extracting targets of interest from the pre-processed infrared image and visible image based on the human-eye visual attention model comprises:
generating a static-image brightness saliency map from an image sequence composed of a plurality of pre-processed infrared images;
generating a motion saliency map from the image sequence composed of the plurality of pre-processed infrared images;
fusing the brightness saliency map and the motion saliency map by weighted summation in a preset ratio, to obtain a final feature saliency map of the infrared image;
separating the targets of interest from the background in the feature saliency map of the infrared image by a local adaptive thresholding method, according to pixel grayscale similarity and salient-region centroid adjacency, and extracting the targets of interest in the infrared image;
generating a static-image brightness saliency map from an image sequence composed of a plurality of pre-processed visible images;
generating a motion saliency map of the visible images from the image sequence composed of the plurality of pre-processed visible images;
fusing the brightness saliency map and the motion saliency map by weighted summation in the preset ratio, to obtain a final feature saliency map of the visible images;
separating the targets of interest from the background in the feature saliency map of the visible images by the local adaptive thresholding method, according to pixel grayscale similarity and salient-region centroid adjacency, and extracting the targets of interest in the visible images;
fusing the targets of interest in the infrared image and the visible images using a predetermined fusion rule, to obtain the final targets of interest in a target fusion image.
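The local adaptive thresholding step of claim 2 can be sketched as follows. The exact rule (which also uses centroid adjacency) is not spelled out in the claim, so this mean-plus-k-standard-deviations block threshold, the function name `extract_targets`, and the window size are all illustrative assumptions.

```python
import numpy as np

def extract_targets(saliency, win=8, k=1.0):
    """Local adaptive thresholding sketch: a pixel is kept as target if it
    exceeds its local window's mean plus k standard deviations."""
    h, w = saliency.shape
    mask = np.zeros_like(saliency, dtype=bool)
    for i in range(0, h, win):
        for j in range(0, w, win):
            blk = saliency[i:i+win, j:j+win]
            t = blk.mean() + k * blk.std()   # threshold adapts per window
            mask[i:i+win, j:j+win] = blk > t
    return mask

# A bright blob on a dark saliency map separates cleanly from the background.
sal = np.zeros((16, 16))
sal[2:5, 2:5] = 1.0
mask = extract_targets(sal)
print(mask[3, 3], mask[10, 10])  # True False
```

Because the threshold is recomputed per window, a dim target in a dark region can still be separated, which is the point of using a local rather than a global threshold.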
3. The method according to claim 2, characterized in that generating the static-image brightness saliency map from the image sequence composed of the plurality of pre-processed infrared images comprises:
establishing an image pyramid for each pre-processed infrared image in the image sequence, to obtain infrared images at a plurality of different resolutions;
transforming and adjusting the infrared images at the different resolutions by differencing, to obtain a plurality of feature difference maps;
fusing the obtained feature difference maps by a normalization operator, to obtain the final static-image brightness saliency map.
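The pyramid-and-difference construction of claim 3 follows the pattern of Itti-style saliency. Below is a minimal sketch under stated assumptions: block averaging stands in for Gaussian pyramid reduction, nearest-neighbour expansion for interpolation, and division by the maximum for the normalization operator; none of these specific choices are fixed by the claim.

```python
import numpy as np

def downsample(img):
    # 2x2 block average (stands in for Gaussian pyramid reduction).
    h, w = img.shape
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

def upsample_to(img, shape):
    # Nearest-neighbour expansion back to the reference resolution.
    ry, rx = shape[0] // img.shape[0], shape[1] // img.shape[1]
    return np.kron(img, np.ones((ry, rx)))[:shape[0], :shape[1]]

def brightness_saliency(img, levels=3):
    """Build a pyramid, take center-surround differences against each coarse
    level, normalize each difference map, and sum them into one saliency map."""
    img = img.astype(np.float64)
    pyr = [img]
    for _ in range(levels):
        pyr.append(downsample(pyr[-1]))
    sal = np.zeros_like(img)
    for coarse in pyr[1:]:
        diff = np.abs(img - upsample_to(coarse, img.shape))
        if diff.max() > 0:
            diff = diff / diff.max()   # simple normalization operator
        sal += diff
    return sal / len(pyr[1:])

img = np.zeros((16, 16))
img[8, 8] = 255.0
sal = brightness_saliency(img)
print(sal[8, 8] > sal[0, 0])  # the isolated bright pixel is most salient
```

A uniform image yields an all-zero saliency map, while an isolated bright spot scores highest: exactly the behavior the brightness saliency map needs for picking out warm targets in an infrared frame.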
4. The method according to claim 2, characterized in that generating the motion saliency map from the image sequence composed of the plurality of pre-processed infrared images comprises:
obtaining image motion vectors from the image sequence composed of the plurality of pre-processed infrared images, the image motion vectors comprising: intensity difference, spatial-consistency difference and temporal-consistency difference;
generating the motion saliency map according to the intensity difference, the spatial-consistency difference and the temporal-consistency difference.
5. The method according to claim 2, characterized in that:
the predetermined fusion rule is an "or" rule;
in the "or" rule, the larger of the gray values of the same pixel in the target-of-interest regions of the corresponding visible image and infrared image is taken as the gray value of that pixel in the target fusion image.
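The "or" rule of claim 5 is a pixel-wise maximum, which NumPy expresses directly; the array values here are made up for demonstration.

```python
import numpy as np

# Target-of-interest regions from the two sources (illustrative values).
ir_t = np.array([[10, 200], [50, 90]], dtype=np.uint8)
vis_t = np.array([[30, 120], [40, 95]], dtype=np.uint8)

# The "or" rule: each fused pixel takes the larger of the two gray values.
fused_t = np.maximum(ir_t, vis_t)
print(fused_t)  # [[ 30 200] [ 50  95]]
```

Taking the maximum guarantees that a target bright in either source stays bright in the fused target region.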
6. The method according to claim 1, characterized in that taking the pre-processed visible image as background and performing grayscale image fusion of the corresponding pre-processed infrared image with the visible image serving as background, to obtain the grayscale fusion image, comprises:
selecting a wavelet basis function and a decomposition level, and performing multi-resolution decomposition on the pre-processed visible image and the pre-processed infrared image respectively, to obtain visible images and infrared images in different scale spaces;
extracting image edge features from the low-frequency components of the visible images and infrared images in each scale space;
performing a fusion operation on the visible images and infrared images in the different scale spaces according to a preset fusion rule, to obtain a multi-resolution representation of the fusion image, and obtaining the grayscale fusion image after inverse wavelet transformation.
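Wavelet-domain fusion as in claim 6 can be sketched with a hand-rolled one-level Haar transform. The choices here are assumptions: the Haar basis, a single decomposition level, averaging the low-frequency sub-bands, and a max-magnitude rule for the high-frequency sub-bands (a crude stand-in for the edge-feature-based rule of claim 7).

```python
import numpy as np

def haar2d(img):
    # One-level 2-D Haar decomposition into LL, LH, HL, HH sub-bands.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return (a+b+c+d)/4, (a+b-c-d)/4, (a-b+c-d)/4, (a-b-c+d)/4

def ihaar2d(ll, lh, hl, hh):
    # Exact inverse of haar2d.
    h, w = ll.shape
    out = np.empty((2*h, 2*w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def wavelet_fuse(vis, ir):
    """Average the low-frequency sub-bands; for each high-frequency sub-band,
    keep the coefficient with the larger magnitude (strongest edge)."""
    sv = haar2d(vis.astype(np.float64))
    si = haar2d(ir.astype(np.float64))
    ll = (sv[0] + si[0]) / 2
    highs = [np.where(np.abs(v) >= np.abs(i), v, i) for v, i in zip(sv[1:], si[1:])]
    return ihaar2d(ll, *highs)

rng = np.random.default_rng(1)
a = rng.random((8, 8))
print(np.allclose(wavelet_fuse(a, a), a))  # fusing an image with itself is lossless
```

Fusing an image with itself reconstructs it exactly, a quick sanity check that the forward and inverse transforms are consistent; a real implementation would use a library wavelet (e.g. several decomposition levels of a smoother basis) instead of one Haar level.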
7. The method according to claim 6, characterized in that:
the preset fusion rule is a weighted fusion rule in which the weight of each source image is determined from gradient and information entropy.
8. The method according to claim 1, characterized in that performing false-color mapping marking of the targets of interest on the grayscale fusion image, and obtaining and outputting the target pseudo-color fusion image, comprises:
inverse-mapping the obtained targets of interest into the grayscale fusion image in HSV space, performing false-color mapping marking of the targets of interest, and obtaining the target pseudo-color fusion image.
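The HSV inverse mapping of claim 8 can be sketched with the standard-library `colorsys` module. The specifics are assumptions: the function name `mark_targets_hsv`, the fixed hue at full saturation, and the floor on the value channel (so a dark target still reads as colored) are illustrative, not the claimed mapping.

```python
import colorsys
import numpy as np

def mark_targets_hsv(fused_gray, target_mask, hue=0.0):
    """Keep the grayscale fusion as background; map each target pixel through
    HSV with a fixed hue, letting the value channel track the gray level."""
    rgb = np.stack([fused_gray] * 3, axis=-1).astype(np.float64) / 255.0
    ys, xs = np.nonzero(target_mask)
    for y, x in zip(ys, xs):
        v = fused_gray[y, x] / 255.0
        # Floor the value so even dark targets remain visibly colored.
        rgb[y, x] = colorsys.hsv_to_rgb(hue, 1.0, max(v, 0.5))
    return (rgb * 255).astype(np.uint8)

gray = np.full((4, 4), 128, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1, 1] = True
out = mark_targets_hsv(gray, mask)
print(out[0, 0], out[1, 1])  # gray background, red target pixel
```

Because only the hue and saturation of target pixels change while their value channel follows the fused gray level, the marked targets stay consistent with the luminance structure of the grayscale fusion image.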
9. The method according to claim 8, characterized in that performing false-color mapping marking of the targets of interest comprises:
marking the targets of interest, extracted according to the source-image features, with preset colors in the grayscale fusion image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810007446.5A CN108090888B (en) | 2018-01-04 | 2018-01-04 | Fusion detection method of infrared image and visible light image based on visual attention model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108090888A true CN108090888A (en) | 2018-05-29 |
CN108090888B CN108090888B (en) | 2020-11-13 |
Family
ID=62179971
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810007446.5A Active CN108090888B (en) | 2018-01-04 | 2018-01-04 | Fusion detection method of infrared image and visible light image based on visual attention model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108090888B (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109146904A (en) * | 2018-08-13 | 2019-01-04 | 合肥英睿系统技术有限公司 | The method and apparatus of infrared image object profile is shown in visible images |
CN109470664A (en) * | 2018-09-27 | 2019-03-15 | 中国船舶工业系统工程研究院 | A kind of system and recognition methods of quickly contactless identification sea film |
CN110322423A (en) * | 2019-04-29 | 2019-10-11 | 天津大学 | A kind of multi-modality images object detection method based on image co-registration |
CN111191574A (en) * | 2019-12-26 | 2020-05-22 | 新绎健康科技有限公司 | Method and device for acquiring viscera partition temperature of facial examination |
CN111208521A (en) * | 2020-01-14 | 2020-05-29 | 武汉理工大学 | Multi-beam forward-looking sonar underwater obstacle robust detection method |
CN111325139A (en) * | 2020-02-18 | 2020-06-23 | 浙江大华技术股份有限公司 | Lip language identification method and device |
CN111345026A (en) * | 2018-08-27 | 2020-06-26 | 深圳市大疆创新科技有限公司 | Image processing and presentation |
CN111833282A (en) * | 2020-06-11 | 2020-10-27 | 毛雅淇 | Image fusion method based on improved DDcGAN model |
CN112233024A (en) * | 2020-09-27 | 2021-01-15 | 昆明物理研究所 | Medium-long wave dual-waveband infrared image fusion method based on difference characteristic color mapping |
CN112308102A (en) * | 2019-08-01 | 2021-02-02 | 北京易真学思教育科技有限公司 | Image similarity calculation method, calculation device, and storage medium |
US11017515B2 (en) | 2019-02-06 | 2021-05-25 | Goodrich Corporation | Thermal image warm-target detection and outline formation |
CN112916407A (en) * | 2020-04-29 | 2021-06-08 | 江苏旷博智能技术有限公司 | Method for sorting coal and gangue |
CN113409232A (en) * | 2021-06-16 | 2021-09-17 | 吉林大学 | Bionic false color image fusion model and method based on sidewinder visual imaging |
CN114419312A (en) * | 2022-03-31 | 2022-04-29 | 南京智谱科技有限公司 | Image processing method and device, computing equipment and computer readable storage medium |
CN115100193A (en) * | 2022-08-23 | 2022-09-23 | 南京天朗防务科技有限公司 | Weak and small target detection and identification method and device based on infrared and visible light images |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1822046A (en) * | 2006-03-30 | 2006-08-23 | 上海电力学院 | Infrared and visible light image fusion method based on regional property fuzzy |
CN101872473A (en) * | 2010-06-25 | 2010-10-27 | 清华大学 | Multiscale image natural color fusion method and device based on over-segmentation and optimization |
US8775597B2 (en) * | 2011-05-26 | 2014-07-08 | Eci Telecom Ltd. | Technique for management of communication networks |
CN104700381A (en) * | 2015-03-13 | 2015-06-10 | 中国电子科技集团公司第二十八研究所 | Infrared and visible light image fusion method based on salient objects |
Also Published As
Publication number | Publication date |
---|---|
CN108090888B (en) | 2020-11-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108090888A (en) | The infrared image of view-based access control model attention model and the fusion detection method of visible images | |
Zhang et al. | A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application | |
Zhu et al. | A fast single image haze removal algorithm using color attenuation prior | |
CN105518709B (en) | The method, system and computer program product of face for identification | |
CN103942577B (en) | Based on the personal identification method for establishing sample database and composite character certainly in video monitoring | |
CN104408482B (en) | A kind of High Resolution SAR Images object detection method | |
CN103729854B (en) | A kind of method for detecting infrared puniness target based on tensor model | |
CN106846289A (en) | A kind of infrared light intensity and polarization image fusion method based on conspicuousness migration with details classification | |
CN109102003A (en) | A kind of small target detecting method and system based on Infrared Physics Fusion Features | |
Lian et al. | A novel method on moving-objects detection based on background subtraction and three frames differencing | |
Hsieh et al. | Fast and robust infrared image small target detection based on the convolution of layered gradient kernel | |
Miller et al. | Person tracking in UAV video | |
Tian et al. | Pedestrian detection based on laplace operator image enhancement algorithm and faster R-CNN | |
CN112861588B (en) | Living body detection method and device | |
CN105184245B (en) | A kind of crowd density estimation method of multiple features fusion | |
Yao et al. | Real-time multiple moving targets detection from airborne IR imagery by dynamic Gabor filter and dynamic Gaussian detector | |
Yao et al. | A novel method for real-time multiple moving targets detection from moving IR camera | |
Dong et al. | FusionPID: A PID control system for the fusion of infrared and visible light images | |
Wu et al. | Spectra-difference based anomaly-detection for infrared hyperspectral dim-moving-point-target detection | |
Yao et al. | Small infrared target detection based on spatio-temporal fusion saliency | |
Jiang et al. | Fusion evaluation of X-ray backscatter image and holographic subsurface radar image | |
CN110502968A (en) | The detection method of infrared small dim moving target based on tracing point space-time consistency | |
Sang et al. | Multiscale centerline extraction of angiogram vessels using Gabor filters | |
Liu et al. | A Generative Adversarial Network for infrared and visible image fusion using adaptive dense generator and Markovian discriminator | |
Li et al. | Contrast and distribution based saliency detection in infrared images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||