CN110210541A - Image fusion method and device, and storage apparatus - Google Patents

Image fusion method and device, and storage apparatus

Info

Publication number
CN110210541A
Authority
CN
China
Prior art keywords
image
weight
fusion
visible images
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910436319.1A
Other languages
Chinese (zh)
Other versions
CN110210541B (en)
Inventor
李乾坤
卢维
殷俊
张兴明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN201910436319.1A
Publication of CN110210541A
Application granted
Publication of CN110210541B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/251 - Fusion techniques of input or preprocessed data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/752 - Contour matching

Abstract

This application discloses an image fusion method, an image fusion device, and a storage apparatus. The image fusion method includes: obtaining a visible-light image and a non-visible-light image captured of the same target scene; performing a first fusion of the visible-light image and the non-visible-light image to obtain an initial fused image; extracting first edge information from the initial fused image and second edge information from the visible-light image; comparing the first edge information with the second edge information and determining, based on the comparison result, fusion weights for the initial fused image and the visible-light image; and performing, based on the fusion weights, a second fusion of the initial fused image and the visible-light image to obtain the final fused image. This scheme allows the final fused image to retain as much information about the target scene as possible.

Description

Image fusion method and device, and storage apparatus
Technical field
This application relates to the technical field of image processing, and in particular to an image fusion method, an image fusion device, and a storage apparatus.
Background
Image fusion is the synthesis of multiple images into a new image using a specific algorithm. At present, image fusion technology has great application value in fields such as remote sensing, safe navigation, medical image analysis, anti-terrorism inspection, environmental protection, traffic monitoring, and disaster detection and forecasting.
Image fusion technology mainly exploits the correlation of multiple images in space and time and the complementarity of their information. Its goal is a final fused image that describes the target scene more comprehensively and clearly, which benefits both human recognition and automatic machine detection. In view of this, how to make the final fused image retain as much information as possible, so as to describe the target scene as comprehensively and clearly as possible, has become an urgent problem to be solved.
Summary of the invention
The technical problem mainly solved by this application is to provide an image fusion method, an image fusion device, and a storage apparatus that enable the final fused image to retain as much target-scene information as possible.
To solve the above problem, a first aspect of this application provides an image fusion method, including: obtaining a visible-light image and a non-visible-light image captured of the same target scene; performing a first fusion of the visible-light image and the non-visible-light image to obtain an initial fused image; extracting first edge information from the initial fused image and second edge information from the visible-light image; comparing the first edge information with the second edge information and determining, based on the comparison result, fusion weights for the initial fused image and the visible-light image; and performing, based on the fusion weights, a second fusion of the initial fused image and the visible-light image to obtain the final fused image.
To solve the above problem, a second aspect of this application provides an image fusion device, including a memory and a processor coupled to each other; the processor is configured to execute program instructions stored in the memory so as to implement the image fusion method of the first aspect.
To solve the above problem, a third aspect of this application provides an image fusion device including an obtaining module, a first fusion module, an edge extraction module, a weight determination module, and a second fusion module. The obtaining module is configured to obtain a visible-light image and a non-visible-light image captured of the same target scene; the first fusion module is configured to perform a first fusion of the visible-light image and the non-visible-light image to obtain an initial fused image; the edge extraction module is configured to extract first edge information from the initial fused image and second edge information from the visible-light image; the weight determination module is configured to compare the first edge information with the second edge information and determine, based on the comparison result, fusion weights for the initial fused image and the visible-light image; and the second fusion module is configured to perform, based on the fusion weights, a second fusion of the initial fused image and the visible-light image to obtain the final fused image.
To solve the above problem, a fourth aspect of this application provides a storage apparatus storing program instructions executable by a processor, the program instructions being used to implement the image fusion method of the first aspect.
In the above scheme, a first fusion is performed on the visible-light image and the non-visible-light image captured of the same target scene to obtain an initial fused image; the first edge information obtained by edge extraction from the initial fused image is compared with the second edge information obtained by edge extraction from the visible-light image to determine the fusion weights used when the initial fused image and the visible-light image undergo the second fusion; and finally, based on these fusion weights, the second fusion of the initial fused image and the visible-light image yields the final fused image. In this way, on top of the initial fused image already retaining the complementary information of the visible-light and non-visible-light images, the weighting based on the edge information of the initial fused image and the visible-light image additionally preserves local feature information, so that the final fused image retains as much target-scene information as possible.
Detailed description of the invention
Fig. 1 is a schematic flowchart of an embodiment of the image fusion method of this application;
Fig. 2 is a schematic processing-flow diagram of an embodiment of the image fusion method of this application;
Fig. 3 is a schematic flowchart of an embodiment of step S12 in Fig. 1;
Fig. 4 is a schematic flowchart of an embodiment of step S121 in Fig. 3;
Fig. 5 is a schematic flowchart of an embodiment of step S14 in Fig. 1;
Fig. 6 is a schematic flowchart of an embodiment of step S52 in Fig. 5;
Fig. 7 is a schematic flowchart of an embodiment of step S15 in Fig. 1;
Fig. 8 is a schematic flowchart of an embodiment of step S151 in Fig. 7;
Fig. 9 is a schematic block diagram of an embodiment of the image fusion device of this application;
Fig. 10 is a schematic block diagram of an embodiment of the storage apparatus of this application;
Fig. 11 is a schematic block diagram of another embodiment of the image fusion device of this application.
Specific embodiment
The schemes of the embodiments of this application are described in detail below with reference to the accompanying drawings.
In the following description, specific details such as particular system structures, interfaces, and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of this application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship. In addition, "multiple" herein means two or more.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of an embodiment of the image fusion method of this application. Specifically, the method may include:
Step S11: obtain a visible-light image and a non-visible-light image captured of the same target scene.
A visible-light image is an image formed by an imaging device sensing, within the visible-light waveband, the light reflected by the target scene. A visible-light image can fully reflect the color of the target scene. In practical engineering, however, a visible-light image is easily affected by natural conditions such as illumination and may lose some detail information.
A non-visible-light image can be obtained by an imaging device sensing the target scene under illumination from a non-visible-light source. Non-visible light refers to electromagnetic waves outside the visible spectrum that cannot be perceived by the human eye, such as radio waves, infrared rays, ultraviolet rays, X-rays (Roentgen rays), and gamma rays. In one implementation scenario, the non-visible-light image may be an infrared image; specifically, it may be an image obtained by the imaging device under near-infrared (NIR) illumination.
The imaging device, for example a non-visible-light camera, can obtain the visible-light image and the non-visible-light image of the target scene through its own lens and sensors. In one implementation scenario, the imaging device may integrate a single lens with dual sensors, or dual lenses with dual sensors, the two sensors being used to acquire the visible-light image and the non-visible-light image respectively. In another implementation scenario, the visible-light image and the non-visible-light image captured by an imaging device equipped with such dual sensors may be obtained over a connection to that device; for example, the images captured in real time by the imaging device may be obtained via the Real Time Streaming Protocol (RTSP). In other implementation scenarios, the visible-light image and the non-visible-light image captured by the imaging device may also be obtained offline, for example via a removable storage medium; this embodiment places no particular limitation here.
Furthermore, an imaging device capable of capturing both visible-light and non-visible-light images may be controlled to photograph the same target scene so as to obtain the visible-light image and the non-visible-light image.
Step S12: perform a first fusion of the visible-light image and the non-visible-light image to obtain an initial fused image.
The visible-light image can reflect the color of the target scene relatively comprehensively, but it is easily affected by natural conditions such as illumination and loses some detail information, whereas the imaging of the non-visible-light image is largely insensitive to ambient visible light. Therefore the initial image obtained by the first fusion of the visible-light image and the non-visible-light image can retain the complementary information of both.
Step S13: extract first edge information from the initial fused image and second edge information from the visible-light image.
The extraction of edge information from the initial fused image and the visible-light image relies on an edge detection algorithm. Edge detection is a fundamental problem in image processing and computer vision; its purpose is to mark the points in a digital image where brightness changes sharply.
At present, edge detection methods fall broadly into three categories: the first category consists of local-operator methods based on a fixed neighborhood, such as differential methods and fitting methods; the second category consists of global extraction methods that minimize an energy criterion, characterized by analyzing the problem with rigorous mathematics, using a one-dimensional cost function as the basis for optimal extraction and extracting edges from the viewpoint of global optimality, such as relaxation methods and neural-network methods; the third category comprises edge extraction methods built on techniques developed in recent years, represented by wavelet transforms, mathematical morphology, and fractal theory.
In practical engineering, edge detection can be implemented with differential operators such as the Roberts, Sobel, Prewitt, and Kirsch operators, and of course also with the Canny operator. Edge detection algorithms are prior art in this field and are not described further in this embodiment.
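For illustration only, a per-pixel edge-strength map of the kind used later can be produced with the Sobel operator as sketched below (OpenCV/NumPy; the function and variable names are placeholders introduced here, not identifiers defined by the patent):

```python
import cv2
import numpy as np

def edge_map(gray: np.ndarray) -> np.ndarray:
    """Per-pixel edge-strength map (the 'edge feature value' compared in step S14)."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.magnitude(gx, gy)  # gradient magnitude per pixel

# first_edge_info  = edge_map(initial_fused_gray)   # step S13, first edge information
# second_edge_info = edge_map(visible_gray)         # step S13, second edge information
```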
Step S14: compare the first edge information with the second edge information, and determine, based on the comparison result, fusion weights for the initial fused image and the visible-light image.
The first edge information of the initial fused image is compared with the second edge information of the visible-light image to determine the fusion weights of the initial fused image and the visible-light image. These fusion weights are therefore directly related to the edge information of the two images, so that when the subsequent second fusion of the initial fused image and the visible-light image is performed based on them, the local feature information of both can be preserved.
Step S15: based on the fusion weights, perform a second fusion of the initial fused image and the visible-light image to obtain the final fused image.
In the above manner, a first fusion is performed on the visible-light image and the non-visible-light image captured of the same target scene to obtain an initial fused image; the first edge information obtained by edge extraction from the initial fused image is compared with the second edge information obtained by edge extraction from the visible-light image to determine the fusion weights for the second fusion of the initial fused image and the visible-light image; and finally the second fusion of the initial fused image and the visible-light image, based on these weights, yields the final fused image. In this way, on top of the initial fused image already retaining the complementary information of the visible-light and non-visible-light images, the weighting based on their edge information additionally preserves local feature information, so that the final fused image retains as much target-scene information as possible and describes the target scene as comprehensively and clearly as possible.
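To make the chain of steps S11 to S15 concrete before the multi-resolution details of the following aspects, the sketch below shows a deliberately simplified, single-resolution (N = 1) version of the whole flow, assuming the two input images are already registered and of equal size; the Sobel-based edge measure and the binary 1/0 weights are just one of the choices the embodiments allow, and all names are illustrative.

```python
import cv2
import numpy as np

def fuse_single_level(visible_bgr: np.ndarray, nonvisible_gray: np.ndarray) -> np.ndarray:
    """Simplified single-resolution sketch of steps S11-S15 (illustrative, not the full scheme)."""
    # S12: first fusion, keep the visible chroma (U, V) and take the non-visible luminance (Y)
    yuv = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2YUV)
    yuv[..., 0] = nonvisible_gray
    initial = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)

    # S13: edge information of the initial fused image and of the visible-light image
    def edge(img_bgr):
        g = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.magnitude(cv2.Sobel(g, cv2.CV_32F, 1, 0), cv2.Sobel(g, cv2.CV_32F, 0, 1))
    first_edge, second_edge = edge(initial), edge(visible_bgr)

    # S14: per-pixel comparison of edge feature values -> fusion weights (here 1 and 0)
    w_first = (first_edge >= second_edge).astype(np.float32)[..., None]
    w_second = 1.0 - w_first

    # S15: second, weighted fusion of the initial fused image and the visible-light image
    fused = w_first * initial.astype(np.float32) + w_second * visible_bgr.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```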
The implementation steps of the image fusion method of this application are described below with reference to the schematic processing-flow diagram of an embodiment shown in Fig. 2 and the other flowcharts.
First aspect:
The first aspect details the implementation of step S12 in the above embodiment: performing a first fusion of the visible-light image and the non-visible-light image to obtain the initial fused image.
Referring to Fig. 3, Fig. 3 is a schematic flowchart of step S12 in Fig. 1. Specifically, it may include:
Step S121: extract the color information and luminance information of the visible-light image and the luminance information of the non-visible-light image.
In one implementation scenario, before step S121 the method may further include registering the visible-light image and the non-visible-light image. Specifically, image registration maps one of two images onto the other by finding a spatial transformation, so that points corresponding to the same spatial position in the two images are aligned. Image registration methods can be roughly divided into three categories. The first is based on gray levels and templates: a correlation is computed directly to find the best match position; template matching (block matching) searches another image for a sub-image similar to a known template image, and gray-level-based matching, also called correlation matching, slides a two-dimensional window over the image, with commonly used algorithms including the mean absolute difference (MAD), the sum of absolute differences, the sum of squared errors, and the mean squared error. The second category comprises feature-based matching methods, such as optical flow and Haar-like feature methods. The third category comprises domain-transform-based methods, such as the Walsh transform and the wavelet transform. Image registration methods are prior art in this field and are not described further in this embodiment.
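Purely as an example of the feature-based category mentioned above, and not as the registration method of the patent, a registration based on ORB features and a RANSAC homography could be sketched as follows (all names are illustrative, and the sketch assumes the two views differ only by a planar homography):

```python
import cv2
import numpy as np

def register_to(moving: np.ndarray, fixed: np.ndarray) -> np.ndarray:
    """Warp `moving` onto `fixed` using an ORB-feature homography (illustrative sketch)."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(moving, None)
    kp2, des2 = orb.detectAndCompute(fixed, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    matches = sorted(matches, key=lambda m: m.distance)[:200]        # keep the best matches
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(moving, H, (fixed.shape[1], fixed.shape[0]))
```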
Specifically, referring to Fig. 4, step S121 may be implemented as follows in this embodiment:
Step S41: convert the visible-light image and the non-visible-light image into a preset color space, respectively.
The preset color space may use the HSI (Hue-Saturation-Intensity) color model, or the YUV color model. When the preset color space uses the HSI color model, the hue represented by H and the saturation represented by S express the color information of the image, and the intensity represented by I expresses its luminance information; when the preset color space uses the YUV color model, the luminance represented by Y expresses the luminance information of the image, and the chrominance represented by U and V expresses its color information.
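For reference, a commonly used definition of the YUV model (BT.601, up to the offset and scaling conventions of a particular implementation) relates Y, U, V to R, G, B as follows; the patent itself only names the color model and does not fix these constants:

```latex
\begin{aligned}
Y &= 0.299\,R + 0.587\,G + 0.114\,B\\
U &= 0.492\,(B - Y)\\
V &= 0.877\,(R - Y)
\end{aligned}
```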
As shown in Fig. 2, image P1 denotes the visible-light image and image P2 denotes the non-visible-light image, each with a resolution of 800*600 ppi. The visible-light image P1 and the non-visible-light image P2 are converted to the YUV color model; note that, before this, the visible-light image P1 and the non-visible-light image P2 have been registered.
Step S42: separate the luminance component and the color components of the converted visible-light image and non-visible-light image, respectively, to obtain the color components and luminance component of the visible-light image and the luminance component of the non-visible-light image.
For example, when the visible-light image P1 and the non-visible-light image P2 are converted to the YUV color model, the luminance component (the Y component) and the color components (the U and V components) of P1 and P2 are separated, yielding the color components (U, V) and luminance component (Y) of the visible-light image P1 and the luminance component (Y) of the non-visible-light image P2.
Step S122: replace the luminance information of the visible-light image with the luminance information of the non-visible-light image, and form the initial fused image from the color information of the visible-light image and the replaced luminance information.
For example, the luminance information (the Y component) of the visible-light image P1 is replaced with the luminance information (the Y component) of the non-visible-light image P2, and the color information (the U and V components) of the visible-light image P1 together with the replaced luminance information (the Y component of the non-visible-light image P2) form the initial fused image P3.
In this way, since the visible-light image can reflect the color of the target scene relatively comprehensively but is easily affected by natural conditions such as illumination and loses some detail information (for example in brightness), whereas the imaging of the non-visible-light image is largely insensitive to ambient visible light, replacing the luminance information of the visible-light image with that of the non-visible-light image and composing the initial fused image from the visible-light color information and the replaced luminance information lets the initial fused image retain good color information while providing a data basis for the subsequent optimization of the luminance information.
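Restated as code, steps S41, S42 and S122 amount to the following sketch, assuming a single-channel, already registered non-visible-light image and using the YUV model of the Fig. 2 example; the function name is a placeholder:

```python
import cv2

def first_fusion(visible_bgr, nonvisible_gray):
    """Steps S41, S42 and S122: convert, separate components, replace Y, recompose P3."""
    y_vis, u_vis, v_vis = cv2.split(cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2YUV))  # S41 + S42
    y_nir = nonvisible_gray                           # luminance component of the non-visible image
    initial_yuv = cv2.merge([y_nir, u_vis, v_vis])    # S122: replaced luminance + visible chroma
    return cv2.cvtColor(initial_yuv, cv2.COLOR_YUV2BGR)  # initial fused image P3
```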
Second aspect:
The second aspect details the implementation of step S13 in the above embodiment: extracting the first edge information of the initial fused image and the second edge information of the visible-light image.
Referring to Fig. 2, step S13 in Fig. 1 may specifically include: using a preset edge extraction algorithm, performing edge extraction on the initial fused image and the visible-light image to obtain a first edge image and a second edge image respectively, where the first edge image contains the first edge information and the second edge image contains the second edge information.
The preset edge extraction algorithm may be one of the differential operators mentioned in the foregoing embodiment, such as the Roberts, Sobel, Prewitt, or Kirsch operator, or another operator such as the Canny operator.
As shown in Fig. 2, edge extraction is applied with the preset edge extraction algorithm to the initial fused image P3 to obtain the first edge image P4, and to the visible-light image to obtain the second edge image P5; the original resolution of 800*600 ppi is kept unchanged.
The first edge image P4 contains the first edge information, and the second edge image P5 contains the second edge information.
Third aspect:
The third aspect details the implementation of step S14 in the above embodiment: comparing the first edge information with the second edge information and determining, based on the comparison result, the fusion weights of the initial fused image and the visible-light image.
Specifically, step S14 in Fig. 1 may include: comparing the first edge information with the second edge information and determining N groups of fusion weights W based on the comparison result, where each group of fusion weights Wk includes a first fusion weight W1k and a second fusion weight W2k corresponding to one resolution Rk.
Specifically, the first edge information includes a first edge feature value for each pixel p(i,j) of the initial fused image, and the second edge information includes a second edge feature value for each pixel p(i,j) of the visible-light image. In Fig. 2, that is, the first edge information includes the first edge feature value of each pixel p(i,j) of the initial fused image P3, and the second edge information includes the second edge feature value of each pixel p(i,j) of the visible-light image P1.
With reference to Fig. 2 and Fig. 5, the above step of comparing the first edge information with the second edge information and determining, based on the comparison result, the N groups of fusion weights W, each group Wk including a first fusion weight W1k and a second fusion weight W2k corresponding to one resolution Rk, may be implemented as follows:
Step S51: compare the edge feature values of corresponding pixels p(i,j) in the first edge information and the second edge information to obtain a feature-value comparison result for each pixel p(i,j).
Referring to Fig. 2, the edge feature values of corresponding pixels p(i,j) in the first edge information contained in the first edge image P4 (obtained by edge extraction from the initial fused image P3) and in the second edge information contained in the second edge image P5 (obtained by edge extraction from the visible-light image P1) are compared. For example, the edge feature value of pixel p(1,1) of the first edge image P4 is compared with the edge feature value of pixel p(1,1) of the second edge image P5; the edge feature value of pixel p(1,2) of the first edge image P4 is compared with that of pixel p(1,2) of the second edge image P5; and so on, until all pixels have been compared.
Step S52: according to the feature-value comparison result of each pixel p(i,j), determine a first sub-weight for each pixel p(i,j) of the initial fused image and a second sub-weight for each pixel p(i,j) of the visible-light image, where the first sub-weights and the second sub-weights of all pixels p(i,j) respectively constitute a first sub-weight set and a second sub-weight set at the original resolution.
That is, according to the feature-value comparison result of each pixel p(i,j), the first sub-weight of each pixel of the initial fused image and the second sub-weight of each pixel of the visible-light image are determined; the first sub-weights and second sub-weights of all pixels then respectively form the first sub-weight set and the second sub-weight set at the original resolution.
Specifically, referring to Fig. 6, step S52 may be implemented as follows:
Step S521: determine whether the first edge feature value of pixel p(i,j) is not less than the second edge feature value; if so, perform step S522, otherwise perform step S523.
Referring to Fig. 2, for example, it is determined in turn whether the first edge feature value of pixel p(1,1) is not less than the second edge feature value, whether the first edge feature value of pixel p(1,2) is not less than the second edge feature value, and so on, until the first and second edge feature values of all pixels have been compared.
Step S522: set the first sub-weight of pixel p(i,j) of the initial fused image to a first preset weight value, and set the second sub-weight of pixel p(i,j) of the visible-light image to a second preset weight value.
Continuing with Fig. 2, for example, if the first edge feature value of p(1,1) is not less than the second edge feature value, the first sub-weight of pixel p(1,1) of the initial fused image P3 is set to the first preset weight value, which may be 1. In one implementation scenario the first preset weight value may also be another positive number less than 1, such as 0.9 or 0.8; in another implementation scenario it may be a positive number greater than 1. The second sub-weight of pixel p(1,1) of the visible-light image P1 is set to the second preset weight value, which may be 0. In one implementation scenario the second preset weight value may also be another positive number less than 1, such as 0.1 or 0.2. In one implementation scenario the first preset weight value exceeds the second preset weight value by a preset threshold, for example 0.5; this embodiment places no restriction on the specific value of the preset threshold.
Step S523: set the first sub-weight of pixel p(i,j) of the initial fused image to the second preset weight value, and set the second sub-weight of pixel p(i,j) of the visible-light image to the first preset weight value.
Continuing with Fig. 2, for example, if the first edge feature value of p(1,1) is less than the second edge feature value, the first sub-weight of pixel p(1,1) of the initial fused image P3 is set to the second preset weight value, which may be 0. In one implementation scenario the second preset weight value may also be another positive number less than 1, such as 0.1 or 0.2; in another implementation scenario it may be a positive number greater than 1. The second sub-weight of pixel p(1,1) of the visible-light image P1 is set to the first preset weight value, which may be 1. In one implementation scenario the first preset weight value may also be another positive number less than 1, such as 0.9 or 0.8. In one implementation scenario the first preset weight value exceeds the second preset weight value by a preset threshold, for example 0.5; this embodiment places no restriction on the specific value of the preset threshold.
In one implementation scenario, the sum of the first preset weight value and the second preset weight value is 1. In another implementation scenario, the sum need not be 1 and may be, for example, 2, 3, 4, or 5. Correspondingly, when the sum of the first and second preset weight values is 1, a weighted sum can be used in the subsequent weighting; when the sum is not 1, a weighted average can be used instead.
The first sub-weights and the second sub-weights of all pixels p(i,j) respectively constitute the first sub-weight set W11 and the second sub-weight set W21 at the original resolution.
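Taking the first preset weight value as 1 and the second as 0 (one of the choices discussed above), steps S51 and S52 reduce to a single vectorized comparison of the two edge maps; a sketch with assumed NumPy inputs and illustrative names:

```python
import numpy as np

def sub_weight_sets(first_edge: np.ndarray, second_edge: np.ndarray,
                    w_first_preset: float = 1.0, w_second_preset: float = 0.0):
    """Steps S51-S52: per-pixel comparison of edge feature values -> W11 and W21."""
    fused_is_stronger = first_edge >= second_edge            # step S521 for every pixel at once
    w11 = np.where(fused_is_stronger, w_first_preset, w_second_preset).astype(np.float32)
    w21 = np.where(fused_is_stronger, w_second_preset, w_first_preset).astype(np.float32)
    return w11, w21   # first and second sub-weight sets at the original resolution
```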
Step S53: downsample the first sub-weight set at the original resolution using a first preset sampling strategy to obtain N-1 first sub-weight sets at different resolutions, and downsample the second sub-weight set at the original resolution using the first preset sampling strategy to obtain N-1 second sub-weight sets at different resolutions.
The first preset sampling strategy may be Gaussian downsampling. Referring to Fig. 2, when Gaussian downsampling is applied to the first sub-weight set W11 at the original resolution R1, the even rows and even columns of W11 are removed to obtain the next first sub-weight set W12, and so on, until the N-th first sub-weight set W1N at a different resolution is obtained. The second sub-weight set W21 is processed in the same way, yielding the second sub-weight sets W22 through W2N; this is not repeated here.
Gaussian downsampling is prior art in this field and is likewise not described further in this embodiment.
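Step S53 can be sketched with repeated Gaussian downsampling; cv2.pyrDown blurs with a Gaussian kernel and then drops every other row and column, which matches the removal of even rows and even columns described above (the function name is a placeholder):

```python
import cv2

def weight_pyramid(w_original, n_levels):
    """Step S53: the original sub-weight set plus N-1 Gaussian-downsampled versions."""
    sets = [w_original]
    for _ in range(n_levels - 1):
        sets.append(cv2.pyrDown(sets[-1]))  # Gaussian blur, then drop even rows/columns
    return sets   # e.g. [W11, W12, ..., W1N] or [W21, W22, ..., W2N]
```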
Step S54: group the N first sub-weight sets and the N second sub-weight sets by resolution to obtain N groups of fusion weights W, where each first sub-weight set is a first fusion weight and each second sub-weight set is a second fusion weight.
Continuing with Fig. 2, for example, the first sub-weight set W11 and the second sub-weight set W21 at the original resolution are placed in one group to obtain the first group of fusion weights W1, and so on, until the N-th first sub-weight set W1N and the N-th second sub-weight set W2N are placed in one group to obtain the N-th group of fusion weights WN. The first sub-weight sets W11, W12, ..., W1N can be denoted as the first fusion weights W1k, where k is an integer from 1 to N, and the second sub-weight sets W21, W22, ..., W2N can be denoted as the second fusion weights W2k.
Fourth aspect:
The fourth aspect details the implementation of step S15 in the above embodiment: performing, based on the fusion weights, the second fusion of the initial fused image and the visible-light image to obtain the final fused image.
Referring to Fig. 2 and Fig. 7, Fig. 7 is a schematic flowchart of an embodiment of step S15 in Fig. 1. Specifically, it may include:
Step S151: obtain N groups of layered images, where each group of layered images includes a first layered image and a second layered image corresponding to one resolution Rk; the N first layered images include the initial fused image and at least one first sampled image of the initial fused image, and the N second layered images include the visible-light image and at least one second sampled image of the visible-light image.
Referring to Fig. 2, for example, the N first layered images include the initial fused image P3 at the original resolution R1 and at least one first sampled image, and the N second layered images include the visible-light image P1 and at least one second sampled image.
Specifically, referring to Fig. 8, in this embodiment the above step S151 may include:
Step S1511: using a second preset sampling strategy, downsample the initial fused image and the visible-light image respectively to obtain N-1 first sampled images at different resolutions and N-1 second sampled images at different resolutions.
The second preset sampling strategy is Laplacian downsampling. The general procedure of Laplacian downsampling is: downsample the original image, upsample the resulting downsampled image again, subtract the upsampled image from the original image, and repeat these steps for each layer. The specific Laplacian downsampling algorithm is prior art in this field and is not described further in this embodiment.
Referring to Fig. 2, the initial fused image P3 and the visible-light image P1 are downsampled using the second preset sampling strategy to obtain N-1 first sampled images P32, P33, ..., P3N at different resolutions and N-1 second sampled images P12, P13, ..., P1N at different resolutions.
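Step S1511's Laplacian decomposition can be sketched as follows; keeping the coarsest low-pass image as the last layer, so that the reconstruction of step S153 can later collapse the pyramid, is an implementation choice assumed here rather than spelled out in the patent:

```python
import cv2

def laplacian_pyramid(img, n_levels):
    """Step S1511: n_levels layered images (band-pass details plus a low-pass residual)."""
    gauss = [img.astype("float32")]
    for _ in range(n_levels - 1):
        gauss.append(cv2.pyrDown(gauss[-1]))
    layers = []
    for k in range(n_levels - 1):
        up = cv2.pyrUp(gauss[k + 1], dstsize=(gauss[k].shape[1], gauss[k].shape[0]))
        layers.append(gauss[k] - up)       # detail retained at resolution R_k
    layers.append(gauss[-1])               # coarsest layer
    return layers   # one layered image per resolution R_1 ... R_N
```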
Step S1512: form the N first layered images from the initial fused image and the N-1 first sampled images, and form the N second layered images from the visible-light image and the N-1 second sampled images.
Continuing with Fig. 2, the N first layered images are formed from the initial fused image P3 and the N-1 first sampled images, and the N second layered images are formed from the visible-light image P1 and the N-1 second sampled images.
Step S152: using the group of fusion weights Wk corresponding to the same resolution Rk, weight each group of layered images to obtain a fused sub-image, where the first fusion weight W1k serves as the weight of the first layered image and the second fusion weight W2k serves as the weight of the second layered image.
Continuing with Fig. 2, for the layered images at the original resolution R1, the corresponding group of fusion weights W1 is used for the weighting to obtain one fused sub-image: the first fusion weight W11 of that group serves as the weight of the first layered image, i.e. of the initial fused image P3 at the original resolution, and the second fusion weight W21 of that group serves as the weight of the second layered image, i.e. of the visible-light image P1 at the original resolution; and so on, which is not repeated one by one here. "The same resolution" in this embodiment and the other embodiments of this application means the same width resolution Lk and the same height resolution Hk.
Specifically, the first fusion weight W1k includes a first sub-weight for each pixel p(i,j) of the first layered image, and the second fusion weight W2k includes a second sub-weight for each pixel p(i,j) of the second layered image. Step S152 may then be implemented as follows in this embodiment.
In this embodiment, the first sub-weight and the second sub-weight of each pixel p(i,j) sum to 1. The values of the corresponding pixels p(i,j) of the first layered image and the second layered image are weighted and summed with the corresponding first and second sub-weights to obtain the value of the corresponding pixel p(i,j) of the fused sub-image. Writing W1k(i,j) and W2k(i,j) for the two sub-weights, I1k(i,j) and I2k(i,j) for the values of pixel p(i,j) in the first and second layered images, and Fk(i,j) for the value of the corresponding pixel of the fused sub-image, the fused sub-image can be obtained with the following formula:
Fk(i,j) = W1k(i,j) * I1k(i,j) + W2k(i,j) * I2k(i,j)
In this embodiment and the other embodiments of this application, the value of a pixel p(i,j) refers to its gray value.
In one implementation scenario, the first sub-weight W1k(i,j) and the second sub-weight W2k(i,j) of each pixel p(i,j) do not sum to 1; in that case, the values of pixel p(i,j) in the first layered image and the second layered image can be weighted and averaged to obtain the value of each pixel of the fused sub-image, specifically by the following formula:
Fk(i,j) = (W1k(i,j) * I1k(i,j) + W2k(i,j) * I2k(i,j)) / (W1k(i,j) + W2k(i,j))
Step S153: perform image reconstruction on the N fused sub-images obtained by weighting the N groups of layered images, to obtain the final fused image.
Here k is an integer from 1 to N. Continuing with Fig. 2, image reconstruction is performed on the N fused sub-images obtained by weighting the N groups of layered images, yielding the final fused image P6. Image reconstruction is the inverse of the layering; it is prior art in this field and is not described further in this embodiment.
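Putting steps S152 and S153 together, each pair of layered images is blended with the weight sets of the same resolution and the blended pyramid is then collapsed into the final fused image; a sketch under the same assumptions as the previous snippets (equal pyramid depth for images and weights, and weights summing to 1 per pixel):

```python
import cv2

def second_fusion(first_layers, second_layers, w_first_sets, w_second_sets):
    """Step S152 (per-level weighting) and step S153 (image reconstruction)."""
    fused_layers = []
    for f, s, wf, ws in zip(first_layers, second_layers, w_first_sets, w_second_sets):
        if f.ndim == 3:                       # broadcast the weight maps over color channels
            wf, ws = wf[..., None], ws[..., None]
        fused_layers.append(wf * f + ws * s)  # weighted sum, weights assumed to sum to 1
    out = fused_layers[-1]
    for layer in reversed(fused_layers[:-1]):  # collapse from coarsest to finest
        out = cv2.pyrUp(out, dstsize=(layer.shape[1], layer.shape[0])) + layer
    return out   # final fused image (e.g. P6 in Fig. 2)
```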
Referring to Fig. 9, Fig. 9 is a schematic block diagram of an embodiment of the image fusion device of this application. Specifically, in this embodiment the image fusion device includes a memory 910 and a processor 920 coupled to each other, and the processor 920 is configured to execute program instructions stored in the memory 910 so as to implement the image fusion method of any of the above embodiments. Specifically, the processor 920 is configured to control the memory 910 so as to obtain from it the visible-light image and the non-visible-light image captured of the same target scene; alternatively, in one implementation scenario, the image fusion device may further include a communication circuit, and the processor 920 controls the communication circuit to obtain the visible-light image and the non-visible-light image captured of the same target scene. The processor 920 is further configured to perform a first fusion of the visible-light image and the non-visible-light image to obtain an initial fused image; to extract the first edge information of the initial fused image and the second edge information of the visible-light image; to compare the first edge information with the second edge information and determine, based on the comparison result, the fusion weights of the initial fused image and the visible-light image; and to perform, based on the fusion weights, a second fusion of the initial fused image and the visible-light image to obtain the final fused image.
The processor 920 controls the memory 910 and itself to implement the steps of any embodiment of the above image fusion method. The processor 920 may also be called a CPU (Central Processing Unit). The processor 920 may be an integrated circuit chip with signal processing capability. The processor 920 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The general-purpose processor may be a microprocessor or any conventional processor. In addition, the processor 920 may be implemented jointly by multiple integrated-circuit chips.
In the above manner, a first fusion is performed on the visible-light image and the non-visible-light image captured of the same target scene to obtain an initial fused image; the first edge information obtained by edge extraction from the initial fused image is compared with the second edge information obtained by edge extraction from the visible-light image to determine the fusion weights for the second fusion of the initial fused image and the visible-light image; and finally the second fusion of the initial fused image and the visible-light image, based on these weights, yields the final fused image. In this way, on top of the initial fused image already retaining the complementary information of the visible-light and non-visible-light images, the weighting based on their edge information additionally preserves local feature information, so that the final fused image retains as much target-scene information as possible and describes the target scene as comprehensively and clearly as possible.
In one embodiment, the processor 920 is further configured to compare the first edge information with the second edge information and determine N groups of fusion weights W based on the comparison result, where each group of fusion weights Wk includes a first fusion weight W1k and a second fusion weight W2k corresponding to one resolution Rk; to obtain N groups of layered images, where each group of layered images includes a first layered image and a second layered image corresponding to one resolution Rk, the N first layered images including the initial fused image and at least one first sampled image of the initial fused image, and the N second layered images including the visible-light image and at least one second sampled image of the visible-light image; to weight each group of layered images with the group of fusion weights Wk of the same resolution Rk to obtain a fused sub-image, the first fusion weight W1k serving as the weight of the first layered image and the second fusion weight W2k serving as the weight of the second layered image; and to perform image reconstruction on the N fused sub-images obtained by weighting the N groups of layered images, to obtain the final fused image, where k is an integer from 1 to N.
In another embodiment, the first fusion weight W1k includes a first sub-weight for each pixel p(i,j) of the first layered image and the second fusion weight W2k includes a second sub-weight for each pixel p(i,j) of the second layered image, and the processor 920 is further configured to weight and sum the values of the corresponding pixels p(i,j) of the first layered image and the second layered image with the corresponding first and second sub-weights to obtain the value of the corresponding pixel p(i,j) of the fused sub-image.
In yet another embodiment, the first edge information includes a first edge feature value for each pixel p(i,j) of the initial fused image and the second edge information includes a second edge feature value for each pixel p(i,j) of the visible-light image. The processor 920 is further configured to compare the edge feature values of corresponding pixels p(i,j) in the first and second edge information to obtain a feature-value comparison result for each pixel p(i,j); to determine, according to the feature-value comparison result of each pixel p(i,j), a first sub-weight for each pixel of the initial fused image and a second sub-weight for each pixel of the visible-light image, the first and second sub-weights of all pixels respectively constituting the first sub-weight set and the second sub-weight set at the original resolution; to downsample the first sub-weight set at the original resolution with a first preset sampling strategy to obtain N-1 first sub-weight sets at different resolutions, and to downsample the second sub-weight set at the original resolution with the first preset sampling strategy to obtain N-1 second sub-weight sets at different resolutions; and to group the N first sub-weight sets and the N second sub-weight sets by resolution to obtain the N groups of fusion weights W, where each first sub-weight set is a first fusion weight and each second sub-weight set is a second fusion weight.
In yet another embodiment, the processor 920 is further configured, when the first edge feature value of pixel p(i,j) is not less than the second edge feature value, to set the first sub-weight of pixel p(i,j) of the initial fused image to the first preset weight value and the second sub-weight of pixel p(i,j) of the visible-light image to the second preset weight value; and, when the first edge feature value of pixel p(i,j) is less than the second edge feature value, to set the first sub-weight of pixel p(i,j) of the initial fused image to the second preset weight value and the second sub-weight of pixel p(i,j) of the visible-light image to the first preset weight value, where the sum of the first preset weight value and the second preset weight value is 1.
In yet another embodiment, the processor 920 is further configured to downsample the initial fused image and the visible-light image respectively using a second preset sampling strategy to obtain N-1 first sampled images at different resolutions and N-1 second sampled images at different resolutions, to form the N first layered images from the initial fused image and the N-1 first sampled images, and to form the N second layered images from the visible-light image and the N-1 second sampled images.
In yet another embodiment, the processor 920 is further configured to extract the color information and luminance information of the visible-light image and the luminance information of the non-visible-light image, to replace the luminance information of the visible-light image with the luminance information of the non-visible-light image, and to form the initial fused image from the color information of the visible-light image and the replaced luminance information.
In yet another embodiment, the processor 920 is further configured to convert the visible-light image and the non-visible-light image into a preset color space, respectively, and to separate the luminance component and the color components of the converted visible-light image and non-visible-light image, respectively, to obtain the color components and luminance component of the visible-light image and the luminance component of the non-visible-light image.
In yet another embodiment, the processor 920 is further configured to perform edge extraction on the initial fused image and the visible-light image using a preset edge extraction algorithm, obtaining a first edge image containing the first edge information and a second edge image containing the second edge information. In one implementation scenario, the image fusion device may further include an imaging device 930, for example a non-visible-light camera, and the processor 920 is further configured to control the imaging device 930, e.g. the non-visible-light camera, to photograph a target scene and obtain the visible-light image and the non-visible-light image. In one implementation scenario the captured non-visible-light image is an infrared image; in other implementation scenarios it may also be an image other than an infrared image, such as a laser image.
Referring to Fig. 10, Fig. 10 is a schematic block diagram of an embodiment of the storage apparatus 1000 of this application. The storage apparatus 1000 of this application stores program instructions 1010 executable by a processor, and the program instructions 1010 are used to implement the steps of any of the above embodiments of the image fusion method.
The storage apparatus 1000 may specifically be a medium capable of storing the program instructions 1010, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, or it may be a server storing the program instructions 1010; the server can send the stored program instructions 1010 to another device for execution, or can itself run the stored program instructions 1010.
Referring to Fig. 11, Fig. 11 is a schematic block diagram of another embodiment of the image fusion device 1100 of this application. Specifically, the image fusion device 1100 of this application includes an obtaining module 1110, a first fusion module 1120, an edge extraction module 1130, a weight determination module 1140, and a second fusion module 1150. The obtaining module 1110 is configured to obtain a visible-light image and a non-visible-light image captured of the same target scene; the first fusion module 1120 is configured to perform a first fusion of the visible-light image and the non-visible-light image to obtain an initial fused image; the edge extraction module 1130 is configured to extract the first edge information of the initial fused image and the second edge information of the visible-light image; the weight determination module 1140 is configured to compare the first edge information with the second edge information and determine, based on the comparison result, the fusion weights of the initial fused image and the visible-light image; and the second fusion module 1150 is configured to perform, based on the fusion weights, a second fusion of the initial fused image and the visible-light image to obtain the final fused image.
In the above manner, the final fused image can retain as much target-scene information as possible, so that the target scene is described as comprehensively and clearly as possible.
Wherein, in one embodiment, weight determination module 1140 is specifically used for comparing first edge information and the second side Edge information, and determine that N group merges weight W based on comparative result, wherein every group of fusion weight WkIncluding a corresponding resolution ratio Rk's First fusion weight W1kWith the second fusion weight W2k, the second Fusion Module 1150 includes: obtaining unit, for obtaining N component layers Image I1, wherein every component tomographic imageIncluding a corresponding resolution ratio RkFirst layer imageWith the second layered imageWherein, N number of first layer image includes at least one first sample graph of original fusion image and original fusion image Picture, N number of second layered image include at least one second sampled images of visible images and visible images;Second fusion Module 1150 includes weighted units, for utilizing corresponding same resolution ratio RkRespective sets merge weight Wk, to every component tomographic imageIt is weighted processing, obtains a fused subimageWherein, the first fusion weight W1kAs first layer imagePower Weight, the second fusion weight W2kAs the second layered imageWeight;Second Fusion Module 1150 further includes reconfiguration unit, is used Image reconstruction is carried out in will N number of fused subimage that processing obtains be weighted to N component tomographic image respectively, is finally merged Image;Wherein, k is 1 integer into N.
Wherein, in another embodiment, the first fusion weight W1kIncluding first layer imageIn each pixel p(i, j)The first sub- weightSecond fusion weight W2kIncluding the second layered imageIn each pixel p(i, j)? Two sub- weightsThe weighted units of second Fusion Module 1150 are specifically used for utilizing corresponding first sub- weight With the second sub- weightTo first layer imageWith the second layered imageIn respective pixel point p(i, j)Value It is weighted summation, obtains fused subimageMiddle respective pixel point p(i, j)Value.
Wherein, In yet another embodiment, first edge information includes each pixel p in original fusion image(i, j)? One edge characteristic value;Second edge information includes each pixel p in visible images(i, j)Second edge characteristic value, weight is true Cover half block 1140 includes: comparing unit, for respectively by corresponding pixel points p in first edge information and second edge information(i, j) Edge feature value be compared, obtain each pixel p(i, j)Characteristic value comparison result;Weight determination module 1140 further includes Determination unit, for according to each pixel p(i, j)Characteristic value comparison result, determine each pixel p in original fusion image(i, j) The first sub- weight and visible images in each pixel p(i, j)The second sub- weight, wherein each pixel p(i, j)First son Weight and the second sub- weight separately constitute the first sub- weight sets and the second sub- weight sets of corresponding original resolution;Weight determines mould Block 1140 further includes sampling unit, for using the first default sampling policy to the first sub- weight sets of corresponding original resolution into Row is down-sampled, obtains N-1 the first sub- weight sets of corresponding different resolution, and using the first default sampling policy to corresponding former The sub- weight sets progress of the second of beginning resolution ratio is down-sampled, obtains N-1 the second sub- weight sets of corresponding different resolution;Weight is true Cover half block 1140 further includes grouped element, for being divided the N number of first sub- weight sets and N number of second sub- weight according to resolution ratio Group obtains N group fusion weight W, wherein every first sub- weight sets is one first fusion weight, and every second sub- weight sets is one the Two fusion weights.
Wherein, In yet another embodiment, if the determination unit of weight determination module 1140 is specifically used for judgement pixel p(i, j)Corresponding first edge characteristic value is not less than second edge characteristic value, then by pixel p in original fusion image(i, j)'s First sub- weight is set as the first default weighted value, it will be seen that pixel p in light image(i, j)The second sub- weight to be set as second default Weighted value;If weight determination module 1140 is also used to judge pixel p(i, j)Corresponding first edge characteristic value is less than second edge Characteristic value, then by pixel p in original fusion image(i, j)The first sub- weight be set as the second default weighted value, it will be seen that light figure The pixel p as in(i, j)The second sub- weight be set as the first default weighted value;Wherein, the first default weighted value and the second default power The sum of weight values are 1.
In yet another embodiment, the obtaining unit of the second fusion module 1150 is specifically configured to down-sample the original fusion image and the visible light image, respectively, with a second preset sampling strategy, obtaining N-1 first sampled images at different resolutions and N-1 second sampled images at different resolutions; the N first layered images are formed from the original fusion image and the N-1 first sampled images, and the N second layered images are formed from the visible light image and the N-1 second sampled images.
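A sketch of how the layered images could be obtained is given below; using cv2.pyrDown as the second preset sampling strategy, and the variable names in the commented usage, are assumptions of this sketch.

import cv2

def build_layered_images(image, n_levels):
    """One layered image per resolution R_k: the input itself plus n_levels - 1
    progressively down-sampled copies."""
    layers = [image]
    for _ in range(n_levels - 1):
        layers.append(cv2.pyrDown(layers[-1]))
    return layers

# e.g. first_layers  = build_layered_images(original_fusion_image, N)
#      second_layers = build_layered_images(visible_image, N)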
In yet another embodiment, the first fusion module 1120 is specifically configured to extract the color information and luminance information of the visible light image and the luminance information of the invisible light image, to replace the luminance information of the visible light image with the luminance information of the invisible light image, and to form the original fusion image from the color information of the visible light image and the replaced luminance information.
In yet another embodiment, the first fusion module 1120 is further configured to convert the visible light image and the invisible light image into a preset color space, respectively, and to separate the luminance component and the color component of each converted image, obtaining the color component and luminance component of the visible light image and the luminance component of the invisible light image.
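As an illustration of this first fusion, a short sketch follows; YCrCb is assumed as the preset color space, and the visible and invisible light images are assumed to be registered, 8-bit, and of equal size.

import cv2

def first_fusion(visible_bgr, invisible_gray):
    """Replace the luminance of the visible light image with the invisible-light
    luminance while keeping the visible image's color information."""
    ycrcb = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = invisible_gray  # Y channel <- invisible-light luminance
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)  # the original fusion image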
In yet another embodiment, the edge extraction module 1130 is specifically configured to perform edge extraction on the original fusion image and the visible light image with a preset edge extraction algorithm, correspondingly obtaining a first edge image and a second edge image, where the first edge image contains the first edge information and the second edge image contains the second edge information.
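A possible edge extraction step is sketched below; the Sobel gradient magnitude is only one choice of preset edge extraction algorithm and is assumed here, as are the variable names in the commented usage.

import cv2

def edge_feature_map(image):
    """Per-pixel edge feature values computed as the Sobel gradient magnitude."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.magnitude(gx, gy)

# e.g. first_edge  = edge_feature_map(original_fusion_image)
#      second_edge = edge_feature_map(visible_image)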
In yet another embodiment, the obtaining module 1110 is specifically configured to capture a target scene with an invisible light camera, obtaining the visible light image and the invisible light image, where the invisible light image is an infrared image.
In the several embodiments provided in this application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and is sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (13)

1. An image fusion method, comprising:
obtaining a visible light image and an invisible light image captured of a same target scene;
performing a first fusion on the visible light image and the invisible light image to obtain an original fusion image;
extracting first edge information of the original fusion image and second edge information of the visible light image, respectively;
comparing the first edge information with the second edge information, and determining fusion weights of the original fusion image and the visible light image, respectively, based on the comparison result;
performing, based on the fusion weights, a second fusion on the original fusion image and the visible light image to obtain a final fused image.
2. The method according to claim 1, wherein the comparing the first edge information with the second edge information, and determining the fusion weights of the original fusion image and the visible light image, respectively, based on the comparison result comprises:
comparing the first edge information with the second edge information, and determining N groups of fusion weights W based on the comparison result, wherein each group of fusion weights W_k includes a first fusion weight W1_k and a second fusion weight W2_k corresponding to one resolution R_k;
the performing, based on the fusion weights, the second fusion on the original fusion image and the visible light image to obtain the final fused image comprises:
obtaining N groups of layered images, wherein each group of layered images includes a first layered image and a second layered image corresponding to one resolution R_k, the N first layered images include the original fusion image and at least one first sampled image of the original fusion image, and the N second layered images include the visible light image and at least one second sampled image of the visible light image;
weighting each group of layered images with the group of fusion weights W_k of the same resolution R_k to obtain a fused sub-image, wherein the first fusion weight W1_k serves as the weight of the first layered image and the second fusion weight W2_k serves as the weight of the second layered image;
performing image reconstruction on the N fused sub-images obtained by weighting the N groups of layered images, to obtain the final fused image;
wherein k is an integer from 1 to N.
3. The method according to claim 2, wherein the first fusion weight W1_k includes a first sub-weight of each pixel p(i,j) in the first layered image, and the second fusion weight W2_k includes a second sub-weight of each pixel p(i,j) in the second layered image;
the weighting each group of layered images with the group of fusion weights W_k of the same resolution R_k to obtain a fused sub-image comprises:
performing a weighted summation of the values of corresponding pixels p(i,j) in the first layered image and the second layered image with the corresponding first sub-weight and second sub-weight, to obtain the value of the corresponding pixel p(i,j) in the fused sub-image.
4. The method according to claim 3, wherein the first edge information includes a first edge feature value of each pixel p(i,j) in the original fusion image, and the second edge information includes a second edge feature value of each pixel p(i,j) in the visible light image;
the comparing the first edge information with the second edge information, and determining the N groups of fusion weights W based on the comparison result comprises:
comparing the edge feature values of corresponding pixels p(i,j) in the first edge information and the second edge information, respectively, to obtain a feature value comparison result of each pixel p(i,j);
determining, according to the feature value comparison result of each pixel p(i,j), the first sub-weight of each pixel p(i,j) in the original fusion image and the second sub-weight of each pixel p(i,j) in the visible light image, wherein the first sub-weights and the second sub-weights of the pixels p(i,j) respectively form a first sub-weight set and a second sub-weight set corresponding to the original resolution;
down-sampling the first sub-weight set corresponding to the original resolution with a first preset sampling strategy to obtain N-1 first sub-weight sets corresponding to different resolutions, and down-sampling the second sub-weight set corresponding to the original resolution with the first preset sampling strategy to obtain N-1 second sub-weight sets corresponding to different resolutions;
grouping the N first sub-weight sets and the N second sub-weight sets by resolution to obtain the N groups of fusion weights W, wherein each first sub-weight set is one first fusion weight and each second sub-weight set is one second fusion weight.
5. The method according to claim 4, wherein the determining, according to the feature value comparison result of each pixel p(i,j), the first sub-weight of each pixel p(i,j) in the original fusion image and the second sub-weight of each pixel p(i,j) in the visible light image comprises:
if the first edge feature value of the pixel p(i,j) is not less than the second edge feature value, setting the first sub-weight of the pixel p(i,j) in the original fusion image to a first preset weight value, and setting the second sub-weight of the pixel p(i,j) in the visible light image to a second preset weight value;
if the first edge feature value of the pixel p(i,j) is less than the second edge feature value, setting the first sub-weight of the pixel p(i,j) in the original fusion image to the second preset weight value, and setting the second sub-weight of the pixel p(i,j) in the visible light image to the first preset weight value;
wherein a sum of the first preset weight value and the second preset weight value is 1.
6. The method according to claim 2, wherein the obtaining the N groups of layered images comprises:
down-sampling the original fusion image and the visible light image, respectively, with a second preset sampling strategy, to obtain N-1 first sampled images corresponding to different resolutions and N-1 second sampled images corresponding to different resolutions;
forming the N first layered images from the original fusion image and the N-1 first sampled images, and forming the N second layered images from the visible light image and the N-1 second sampled images.
7. The method according to claim 1, wherein the performing the first fusion on the visible light image and the invisible light image to obtain the original fusion image comprises:
extracting color information and luminance information of the visible light image and luminance information of the invisible light image;
replacing the luminance information of the visible light image with the luminance information of the invisible light image, and forming the original fusion image from the color information of the visible light image and the replaced luminance information.
8. The method according to claim 7, wherein the extracting the color information and luminance information of the visible light image and the luminance information of the invisible light image comprises:
converting the visible light image and the invisible light image into a preset color space, respectively;
separating the luminance component and the color component of each of the converted visible light image and the converted invisible light image, to obtain the color component and luminance component of the visible light image and the luminance component of the invisible light image.
9. The method according to claim 1, wherein the extracting the first edge information of the original fusion image and the second edge information of the visible light image, respectively, comprises:
performing edge extraction on the original fusion image and the visible light image with a preset edge extraction algorithm, to correspondingly obtain a first edge image and a second edge image, wherein the first edge image contains the first edge information and the second edge image contains the second edge information;
the obtaining the visible light image and the invisible light image captured of the same target scene comprises:
capturing a target scene with an invisible light camera to obtain the visible light image and the invisible light image, wherein the invisible light image is an infrared image.
10. An image fusion device, comprising a memory and a processor coupled to each other;
wherein the processor is configured to execute program instructions stored in the memory, to implement the method according to any one of claims 1 to 9.
11. The device according to claim 10, further comprising an imaging device configured to capture the visible light image and the invisible light image.
12. An image fusion device, comprising:
an obtaining module, configured to obtain a visible light image and an invisible light image captured of a same target scene;
a first fusion module, configured to perform a first fusion on the visible light image and the invisible light image to obtain an original fusion image;
an edge extraction module, configured to extract first edge information of the original fusion image and second edge information of the visible light image, respectively;
a weight determination module, configured to compare the first edge information with the second edge information, and to determine fusion weights of the original fusion image and the visible light image, respectively, based on the comparison result;
a second fusion module, configured to perform, based on the fusion weights, a second fusion on the original fusion image and the visible light image to obtain a final fused image.
13. A storage device storing program instructions executable by a processor, the program instructions being used to implement the method according to any one of claims 1 to 9.
CN201910436319.1A 2019-05-23 2019-05-23 Image fusion method and device, and storage device Active CN110210541B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910436319.1A CN110210541B (en) 2019-05-23 2019-05-23 Image fusion method and device, and storage device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910436319.1A CN110210541B (en) 2019-05-23 2019-05-23 Image fusion method and device, and storage device

Publications (2)

Publication Number Publication Date
CN110210541A true CN110210541A (en) 2019-09-06
CN110210541B CN110210541B (en) 2021-09-03

Family

ID=67788443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910436319.1A Active CN110210541B (en) 2019-05-23 2019-05-23 Image fusion method and device, and storage device

Country Status (1)

Country Link
CN (1) CN110210541B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090010633A1 (en) * 2007-07-06 2009-01-08 Flir Systems Ab Camera and method for use with camera
US20140321712A1 (en) * 2012-08-21 2014-10-30 Pelican Imaging Corporation Systems and Methods for Performing Depth Estimation using Image Data from Multiple Spectral Channels
US20170287190A1 (en) * 2014-12-31 2017-10-05 Flir Systems, Inc. Image enhancement with fusion
CN104683767A (en) * 2015-02-10 2015-06-03 浙江宇视科技有限公司 Fog penetrating image generation method and device
CN105069768A (en) * 2015-08-05 2015-11-18 武汉高德红外股份有限公司 Visible-light image and infrared image fusion processing system and fusion method
WO2018048231A1 (en) * 2016-09-08 2018-03-15 Samsung Electronics Co., Ltd. Method and electronic device for producing composite image
CN106600572A (en) * 2016-12-12 2017-04-26 长春理工大学 Adaptive low-illumination visible image and infrared image fusion method
CN107103331A (en) * 2017-04-01 2017-08-29 中北大学 A kind of image interfusion method based on deep learning
US20180373948A1 (en) * 2017-06-22 2018-12-27 Qisda Corporation Image capturing device and image capturing method
CN107248150A (en) * 2017-07-31 2017-10-13 杭州电子科技大学 A kind of Multiscale image fusion methods extracted based on Steerable filter marking area
CN107580163A (en) * 2017-08-12 2018-01-12 四川精视科技有限公司 A kind of twin-lens black light camera
CN107730482A (en) * 2017-09-28 2018-02-23 电子科技大学 A kind of sparse blending algorithm based on region energy and variance
CN109712102A (en) * 2017-10-25 2019-05-03 杭州海康威视数字技术股份有限公司 A kind of image interfusion method, device and image capture device
CN108364272A (en) * 2017-12-30 2018-08-03 广东金泽润技术有限公司 A kind of high-performance Infrared-Visible fusion detection method
CN109670522A (en) * 2018-09-26 2019-04-23 天津工业大学 A kind of visible images and infrared image fusion method based on multidirectional laplacian pyramid

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MOHAMED AWAD: "A Real-Time FPGA Implementation of Visible Near Infrared Fusion Based Image Enhancement", 2018 25th IEEE International Conference on Image Processing (ICIP) *
李昌兴: "Infrared and Visible Image Fusion Based on FPDEs and CBF" (基于FPDEs与CBF的红外与可见光图像融合), 计算机科学 (Computer Science) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111698434A (en) * 2019-03-11 2020-09-22 佳能株式会社 Image processing apparatus, control method thereof, and computer-readable storage medium
CN111698434B (en) * 2019-03-11 2022-05-03 佳能株式会社 Image processing apparatus, control method thereof, and computer-readable storage medium
US11423524B2 (en) 2019-03-11 2022-08-23 Canon Kabushiki Kaisha Image processing apparatus, method for controlling image processing apparatus, and non- transitory computer-readable storage medium
WO2020238416A1 (en) * 2019-05-31 2020-12-03 华为技术有限公司 Image processing method and related device
CN110796629A (en) * 2019-10-28 2020-02-14 杭州涂鸦信息技术有限公司 Image fusion method and system
CN110796629B (en) * 2019-10-28 2022-05-17 杭州涂鸦信息技术有限公司 Image fusion method and system
CN111563552A (en) * 2020-05-06 2020-08-21 浙江大华技术股份有限公司 Image fusion method and related equipment and device
CN113810557A (en) * 2020-06-17 2021-12-17 株式会社理光 Image processing apparatus and image reading method
CN113810557B (en) * 2020-06-17 2024-04-30 株式会社理光 Image processing apparatus and image reading method
CN116228618A (en) * 2023-05-04 2023-06-06 中科三清科技有限公司 Meteorological cloud image processing system and method based on image recognition

Also Published As

Publication number Publication date
CN110210541B (en) 2021-09-03

Similar Documents

Publication Publication Date Title
CN110210541A (en) Image interfusion method and equipment, storage device
Tursun et al. An objective deghosting quality metric for HDR images
Shao et al. Remote sensing image fusion with deep convolutional neural network
Liu et al. MLFcGAN: Multilevel feature fusion-based conditional GAN for underwater image color correction
Liu et al. End-to-end single image fog removal using enhanced cycle consistent adversarial networks
Lore et al. LLNet: A deep autoencoder approach to natural low-light image enhancement
Vanmali et al. Visible and NIR image fusion using weight-map-guided Laplacian–Gaussian pyramid for improving scene visibility
Zhang et al. A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application
CN109376667A (en) Object detection method, device and electronic equipment
CN108769550B (en) Image significance analysis system and method based on DSP
CN107248174A Target tracking method based on the TLD algorithm
CN109493283A Method for eliminating ghosting in high dynamic range images
CN109741240A Multi-planar image stitching method based on hierarchical clustering
US20220122360A1 (en) Identification of suspicious individuals during night in public areas using a video brightening network system
Banerjee et al. Nighttime image-dehazing: a review and quantitative benchmarking
CN112785534A (en) Ghost-removing multi-exposure image fusion method in dynamic scene
Ulhaq et al. FACE: Fully automated context enhancement for night-time video sequences
Bi et al. Haze removal for a single remote sensing image using low-rank and sparse prior
CN114627269A (en) Virtual reality security protection monitoring platform based on degree of depth learning target detection
CN109410161A Fusion method for infrared polarization images based on YUV separation and multiple features
Rao et al. An Efficient Contourlet-Transform-Based Algorithm for Video Enhancement.
Pawłowski et al. Visualization techniques to support CCTV operators of smart city services
Hovhannisyan et al. AED-Net: A single image dehazing
CN113159229B (en) Image fusion method, electronic equipment and related products
Zheng et al. Overwater image dehazing via cycle-consistent generative adversarial network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant