CN104021559B - Image registration method based on mutual information and Harris corner point detection - Google Patents

Publication number
CN104021559B
CN104021559B · Application CN201410269698.7A
Authority
CN
China
Prior art keywords
image, registration, subject, rectangle, rectangle frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410269698.7A
Other languages
Chinese (zh)
Other versions
CN104021559A (en)
Inventors
马文萍 (Ma Wenping)
焦李成 (Jiao Licheng)
范霞妃 (Fan Xiafei)
公茂果 (Gong Maoguo)
马晶晶 (Ma Jingjing)
王爽 (Wang Shuang)
杨淑媛 (Yang Shuyuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201410269698.7A
Publication of CN104021559A
Application granted
Publication of CN104021559B
Active legal status
Anticipated expiration


Abstract

The invention discloses an image registration method based on improved mutual information and Harris corner features. The method mainly addresses the long runtime and low accuracy of traditional registration methods. Its implementation steps are: first, input the image F to be registered and the reference image R; second, place rectangular frames at arbitrary positions in both F and R and find the most similar pair of frames; third, extract the corner features of the most similar frames and obtain the matching points inside them; fourth, delete the mismatched points, register the similar frames from the extracted matching points, and record the registration transformation parameters; fifth, apply an affine transformation with these parameters to the image F to be registered; sixth, fuse the transformed image with the reference image R to obtain the registered image. The method registers quickly and accurately and can be used to register medical images, natural images, and synthetic aperture radar images.

Description

Image registration method based on mutual information and Harris corner detection
Technical field
The invention belongs to the field of image processing and further relates to the field of image registration; it concerns an image registration method based on improved mutual information and Harris corner detection, which can be used for target recognition.
Background technology
Intelligent information processing, and target recognition in particular, is the technology by which precision-guided weapons process target signals, background signals, and interference in real time; it is an important indicator of a weapon's degree of intelligence. Recognizing a target from a single image is often subject to randomness and interference, so its reliability and stability are poor, and the detectability and recognition accuracy of the target decline. For this reason, images acquired at different times or different locations, by different sensors or by the same sensor, need to be registered.
At present, simple mutual-information-based image registration methods are widely used. These methods rely mainly on the grey-level information of the images, which has an inherent drawback: when the image to be registered contains large homogeneous regions such as sky, desert, ocean, or grassland, the mutual information is near its maximum however the images are matched, making accurate registration difficult. Some researchers have proposed registration methods that combine image features with grey-level information. Although such methods improve on purely grey-level registration, they require a global parameter optimization search, converge slowly, and extract features that are sensitive to noise, leading to inaccurate registration results and reduced target recognition accuracy.
Summary of the invention
In view of the shortcomings of the prior art described above, the invention proposes an image registration method based on mutual information and Harris corner detection, to improve registration speed and accuracy.
To achieve the above objective, the technical scheme of the invention comprises the following steps:
(1) Input the image F to be registered and the reference image R;
(2) Place n rectangular frames at arbitrary positions in the image F to be registered and in the reference image R, obtaining n pairs of rectangular frames; the size of the frames is set according to the size of the similar regions of the two images;
(3) Compute the mutual information between the rectangular frames of the image F to be registered and those of the reference image R, and iteratively optimize it with a genetic algorithm, producing the pair of frames with the largest mutual information as the similar rectangular frames;
(4) Smooth the two similar rectangular frames with a Gaussian window, then extract the one-to-one corresponding corners inside them with the Harris corner detection method, obtaining k corner pairs, where k is an integer greater than or equal to 1;
(5) For each corner pair i, compute the distances dxi and dyi in the x and y directions, and the means μx, μy and variances σx, σy of the distances of all corner pairs in the two directions; set a threshold ω according to the number of matching point pairs required to match the two frames, and test whether each corner pair satisfies the following conditions; pairs that satisfy them are treated as matching points, and the others are treated as mismatches and deleted:
|dxi − μx| ≤ ωσx,  i = 0, 1, 2, …, k;
|dyi − μy| ≤ ωσy,  i = 0, 1, 2, …, k,
where i ranges over 0 ≤ i ≤ k and k is the total number of extracted corner pairs;
(6) Set a corner-count threshold δ according to the images being registered, and test whether the number of matching point pairs exceeds δ; if it does, go to the next step, otherwise return to step (5);
(7) Apply affine transformations to the rectangular frame in the image F to be registered, computing the mutual information of the two similar frames after each transformation and optimizing it with a genetic algorithm; when the mutual information of the two frames is maximal, record the transformation parameters, i.e. the translation distances and the rotation angle;
(8) Apply the recorded transformation parameters to the image F to be registered as an affine transformation, obtaining the transformed image F1;
(9) Fuse the transformed image F1 with the reference image R to obtain the registered image.
Compared with the prior art, the present invention has the following advantages:
First, the invention extracts the similar rectangular frames of the two images, i.e. the most similar regions, which removes interfering content from the image to be registered and the reference image. The method requires no additional prior assumptions, segmentation, or preprocessing, so the registration results are more accurate.
Second, the invention applies Harris corner detection only to the similar rectangular frames of the two images rather than to the whole images, so only the corner information inside the similar frames is detected. This reduces the algorithm's complexity and saves a substantial part of the time needed for registration.
Description of the drawings
Fig. 1 is the flowchart of the present invention;
Fig. 2 is the registration result figure that the present invention is applied to medical image;
Fig. 3 is the registration result figure that the present invention is applied to natural image;
Fig. 4 is the registration result figure that the present invention is applied to synthetic aperture radar SAR image.
Specific embodiment
The technical scheme and effects of the invention are described in further detail below with reference to the accompanying drawings.
With reference to Fig. 1, the implementation steps of the present invention are as follows:
Step 1: input the image F to be registered and the reference image R.
Images with different characteristics behave differently when the most similar rectangular frames are sought and the corner features are extracted. When the input images F and R are medical images, the two images may contain differing regions, so finding their most similar region is particularly important during registration. When the inputs are natural images, they are hardly affected by noise and have distinct contours, and the invention achieves high-precision registration. When the inputs are synthetic aperture radar (SAR) images, they are strongly affected by noise and may contain differing regions, so the parameters need to be adjusted appropriately during registration.
Step 2: place rectangular frames at arbitrary positions in the image F to be registered and in the reference image R.
Place n equally sized rectangular frames at arbitrary positions in the image F to be registered and in the reference image R, obtaining n pairs of frames; the size of each frame is set according to the size of the similar regions of the two images. When the input images are medical images, differing regions may appear in the two images, so the frame size should be reduced when searching for the most similar region. When the inputs are SAR images, which are strongly affected by noise and may contain differing regions, registration is harder and the frame size should also be reduced appropriately. When the inputs are natural images, which are hardly affected by noise, have distinct contours, and share large similar regions, the frame size can be increased.
Step 3: find the most similar rectangular frames in the image F to be registered and the reference image R.
In this step, the mutual information between the rectangular frames of the image F to be registered and those of the reference image R is computed and optimized iteratively with a genetic algorithm, and the pair of frames with the largest mutual information is produced as the similar rectangular frames. The implementation is as follows:
(3.1) Compute the joint probability density of the two rectangular frames and the probability density of each frame:
where PF,R(DNF, DNR) is the joint probability density of the rectangular frame of the image F to be registered and the rectangular frame of the reference image R;
PR(DNR) is the probability density of the rectangular frame of the reference image R;
PF(DNF) is the probability density of the rectangular frame of the image F to be registered;
h is the joint grey-level histogram of the rectangular frames of the image F to be registered and the reference image R;
h(DNF, DNR) is the number of occurrences of the pixel combination with grey value DNF in the frame of the image F to be registered and grey value DNR in the frame of the reference image R. The grey values in the frame of the image F to be registered range over (0, m) and those in the frame of the reference image R over (0, n), i.e. DNF and DNR satisfy 0 ≤ DNF ≤ m and 0 ≤ DNR ≤ n, where m is the maximum pixel grey value in the frame of the image F to be registered and n is the maximum pixel grey value in the frame of the reference image R.
(3.2) Compute the mutual information of the frames from the quantities of step (3.1), as follows:
(3.2.1) Using the probability density PF(DNF) of the rectangular frame of the image F to be registered, compute the entropy HF of that frame:
(3.2.2) Using the probability density PR(DNR) of the rectangular frame of the reference image R, compute the entropy HR of that frame:
(3.2.3) Using the joint probability density PF,R(DNF, DNR) of the two frames, compute their joint entropy HFR:
(3.2.4) From the entropy HF of the frame of the image F to be registered, the entropy HR of the frame of the reference image R, and their joint entropy HFR, compute the mutual information MI(F, R) of the two frames:
MI(F, R) = HF + HR − HFR.
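The computation of steps (3.1) to (3.2.4) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the histogram bin count and the use of base-2 logarithms are assumptions.

```python
import numpy as np

def mutual_information(patch_f, patch_r, bins=64):
    """MI(F, R) = HF + HR - HFR from the joint grey-level histogram
    of two equally sized rectangular frames (steps 3.1-3.2.4)."""
    h, _, _ = np.histogram2d(patch_f.ravel(), patch_r.ravel(), bins=bins)
    p_fr = h / h.sum()            # joint probability density P_{F,R}
    p_f = p_fr.sum(axis=1)        # density of the frame of image F
    p_r = p_fr.sum(axis=0)        # density of the frame of image R

    def entropy(p):
        p = p[p > 0]              # 0 * log(0) is taken as 0
        return -np.sum(p * np.log2(p))

    return entropy(p_f) + entropy(p_r) - entropy(p_fr.ravel())
```

Identical frames give MI equal to the frame's own entropy, while a constant frame gives MI of 0, which illustrates why large homogeneous regions defeat purely grey-level registration.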
(3.3) Optimize the mutual information of the frame pairs with a genetic algorithm to find the most similar frames. The optimization proceeds as follows:
(3.3.1) Take the n pairs of rectangular frames obtained above as the initial population, each pair being one individual;
(3.3.2) Evolve the initial population by selection, crossover, and mutation:
(3.3.2a) Select individuals according to their fitness values, preferring large ones; the fitness is computed with the mutual-information formula MI(F, R) of step (3.2.4), i.e.:
eval = H(IF) + H(IR) − H(IF, IR)
The fitness eval measures the mutual information: the fitter an individual, the larger its mutual information. The population is evaluated through this fitness and an optimizing search is performed, finally producing the individual with the largest mutual information;
(3.3.2b) Arbitrarily select two parent individuals and randomly choose a position on their chromosomes at which to exchange genes, obtaining two new offspring. This expands the search space and helps find individuals better than those in the initial population;
(3.3.2c) Randomly select individuals and apply gene mutation. This increases the diversity of the population and effectively prevents the algorithm from getting trapped in a local optimum;
(3.3.3) Check the stopping condition: if the population has evolved for 100 generations or an individual's fitness exceeds 0.9, stop the iteration and output the optimal solution, i.e. the pair of most similar rectangular frames; otherwise return to step (3.3.2) and continue iterating. The value 0.9 is empirical, not fixed: for some special images that are affected by noise, or where the image to be registered and the reference image differ locally, the mutual information can never reach 0.9, and the value should then be reduced appropriately.
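The evolution loop of steps (3.3.1) to (3.3.3), and with a different fitness and different individuals the loop of step (7.5), can be sketched generically. The operator choices here (fitter-half selection, single-point crossover, a 10% mutation rate) are assumptions for illustration; the patent fixes only the fitness (mutual information), the 100-generation cap, and the 0.9 stopping threshold.

```python
import random

def genetic_search(init_pop, fitness, max_gen=100, target=0.9):
    """Schematic GA loop: selection keeps the fitter half, single-point
    crossover swaps one chromosome segment between two parents, and
    mutation perturbs one gene.  Individuals are lists of numbers
    (frame positions in step 3.3, transform parameters in step 7.5)."""
    pop = [list(ind) for ind in init_pop]
    for _ in range(max_gen):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) > target:          # stop: fitness above 0.9
            break
        parents = pop[: len(pop) // 2]        # selection
        children = []
        while len(parents) + len(children) < len(pop):
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(a))    # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:         # mutation
                i = random.randrange(len(child))
                child[i] += random.uniform(-1.0, 1.0)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

Because the fittest individual is always carried into the next generation, the best solution found never degrades across iterations.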
Step 4: extract the corner features of the most similar rectangular frames with the Harris corner detection method.
This step uses the Harris corner detector because many existing corner detectors operate only along the horizontal and vertical directions, or along the horizontal, vertical, and diagonal directions, and easily misdetect edge features as corners. The Harris detector, being based on a Taylor expansion, can operate along all directions and effectively avoids mistaking edges for corners. The extraction proceeds as follows:
(4.1) For each pixel, compute the values of the three autocorrelation terms A, B, and C:
where w(x, y) is a Gaussian function, σ is its variance, x is the horizontal coordinate, y is the vertical coordinate, ⊗ denotes convolution, Ix is the horizontal difference, and Iy is the vertical difference. A Gaussian is used here because positions near the centre receive large weights while positions farther from the centre receive smaller ones, which reduces the influence of noise and improves the precision of the corner detection;
(4.2) Using the three autocorrelation terms A, B, and C computed in step (4.1), compute the Harris corner response R of each pixel:
R = AB − C² − k(A + B)², where k is any value between 0.04 and 0.06;
(4.3) Decide whether each pixel is a corner: if its response R from step (4.2) exceeds 0.8, it is treated as a corner and extracted; otherwise it is treated as a non-corner. Here 0.8 is not fixed: for some special images that require more corner features, this value can be reduced appropriately.
This step performs corner detection on the two similar rectangular frames. Because the frames are similar, the detected corners correspond one to one: a corner at a given position in the frame of the image F to be registered corresponds to the corner at the same position in the frame of the reference image R; such a pair of corners is called a corner pair. The number of corner pairs obtained in this step is denoted k, where k is an integer greater than or equal to 1.
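Steps (4.1) to (4.3) can be sketched as follows. The gradient operator, the normalisation of R before thresholding, and the choice k = 0.04 are assumptions for illustration; the response uses the standard Harris form R = AB − C² − k(A + B)².

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_corners(img, sigma=1.0, k=0.04, thresh=0.8):
    """Gaussian-weighted autocorrelation terms A, B, C (step 4.1),
    corner response R (step 4.2), thresholding at 0.8 of the maximum
    response (step 4.3).  Returns (row, col) corner coordinates."""
    img = np.asarray(img, dtype=float)
    iy, ix = np.gradient(img)              # vertical / horizontal differences
    a = gaussian_filter(ix * ix, sigma)    # A = w(x, y) convolved with Ix^2
    b = gaussian_filter(iy * iy, sigma)    # B = w(x, y) convolved with Iy^2
    c = gaussian_filter(ix * iy, sigma)    # C = w(x, y) convolved with Ix*Iy
    r = (a * b - c * c) - k * (a + b) ** 2
    if r.max() > 0:
        r = r / r.max()                    # so thresh is a fraction of the peak
    return np.argwhere(r > thresh)
```

On a bright square against a dark background, the strongest responses land near the square's four corners, while edges, where only one gradient direction dominates, are suppressed.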
Step 5: obtain the matching point pairs inside the similar rectangular frames.
(5.1) Number the k corner pairs extracted in step 4 as 1, 2, …, i, …, k, and compute for each pair i its distance dxi in the x direction and its distance dyi in the y direction:
dxi = xi − x'i,
dyi = yi − y'i,
where xi and yi are the horizontal and vertical coordinates in the image F to be registered, and x'i and y'i are the horizontal and vertical coordinates in the reference image R;
(5.2) Compute the means μx, μy and variances σx, σy of the distances of all corner pairs in the x and y directions:
where i ranges over 0 ≤ i ≤ k and k is the total number of extracted corner pairs;
(5.3) Set a threshold ω according to the number of matching point pairs required to match the two frames, and test whether the distances of each corner pair i satisfy the following conditions; pairs that satisfy them are treated as matching points, and the others are treated as mismatches and deleted:
|dxi − μx| ≤ ωσx,  i = 0, 1, 2, …, k;
|dyi − μy| ≤ ωσy,  i = 0, 1, 2, …, k,
In this step, the threshold ω is set according to the number of matching point pairs required to match the two frames; different image types need different numbers of matches. For SAR images, which contain many noise points, more corners must be extracted, so ω can be set to a value between 0.2 and 0.4. For natural images, which are affected little by noise and have clear contours, ω can be set to a value between 0.8 and 1. For medical images, which are also affected little by noise, ω can be set to a value between 0.6 and 0.8.
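The mismatch rejection of step 5 can be sketched as follows. The patent calls σx and σy "variances", but the thresholds |dxi − μx| ≤ ωσx only make dimensional sense with standard deviations, so that reading is used here; it is an interpretation, not a quotation.

```python
import numpy as np

def reject_mismatches(corners_f, corners_r, omega=0.8):
    """Step 5: displacements dx_i, dy_i between corresponding corners,
    their means (mu_x, mu_y) and standard deviations (sigma_x, sigma_y),
    then keep pair i only when |dx_i - mu_x| <= omega * sigma_x and
    |dy_i - mu_y| <= omega * sigma_y.  Returns a boolean keep-mask."""
    d = np.asarray(corners_f, float) - np.asarray(corners_r, float)
    mu = d.mean(axis=0)           # (mu_x, mu_y)
    sigma = d.std(axis=0)         # (sigma_x, sigma_y)
    return np.all(np.abs(d - mu) <= omega * sigma, axis=1)
```

Corner pairs whose displacement agrees with the dominant translation survive, an outlier pair is dropped, and step 6 then checks that enough pairs remain.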
Step 6: test whether the number of matching point pairs is sufficient.
Set a corner-count threshold δ and test whether the number of matching point pairs exceeds δ; if it does, go to the next step, otherwise return to step (5). The threshold δ is set according to the images being registered and usually takes an integer value between 10 and 15. For SAR images, which contain many noise points, more corners must be extracted, and δ usually ranges from 12 to 15. For natural images, which are affected little by noise and have clear contours, δ usually ranges from 10 to 12. For medical images, which are also affected little by noise, δ usually ranges from 10 to 12.
Step 7: register the similar rectangular frames and record the affine transformation parameters.
In this step, affine transformations are applied to the rectangular frame in the image F to be registered; after each transformation the mutual information of the two similar frames is computed, and the transformation parameters are optimized with a genetic algorithm. When the mutual information of the two frames reaches its maximum, the transformation parameters at that point, i.e. the translation distances and the rotation angle, are recorded. The process is as follows:
(7.1) Set the initial transformation parameters θ = 0, tx = 0, ty = 0, where θ is the rotation angle by which the coordinates (x, y) are transformed to (x', y'), and tx and ty are the translation distances in the x and y directions; (x', y') are the coordinates of the rectangular frame of the image F to be registered after the affine transformation, and (x, y) are the coordinates of that frame before it;
(7.2) Apply the affine transformation to the rectangular frame in the image F to be registered, i.e. translate and rotate the frame by the following formulas:
Translation formula:
Rotation formula is as follows:
where (x', y') are the coordinates of the rectangular frame after the affine transformation, and (x, y) are the coordinates of the rectangular frame of the image F to be registered;
(7.3) Compute the mutual information between the transformed rectangular frame in the image F to be registered and the rectangular frame of the reference image, using the mutual-information formula MI(F, R) of step (3.2.4);
(7.4) Check the stopping condition: if the mutual information MI(F, R) exceeds 0.9, stop the iteration and output the optimal solution, i.e. the optimal transformation parameters θ, tx, ty, and record these parameter values; otherwise, go to step (7.5);
(7.5) Optimize the transformation parameters θ, tx, ty with the genetic algorithm:
(7.5.1) Generate the initial population:
According to the images being registered, set the maximum and minimum of each affine transformation parameter θ, tx, ty, i.e. the ranges of the parameters. Randomly generate 30 values of θ within the range of θ, 30 values of tx within the range of tx, and 30 values of ty within the range of ty. The first values of θ, tx, and ty form one individual, the second values form the second individual, and so on, producing 30 individuals each composed of θ, tx, ty; these 30 individuals form the initial population;
(7.5.2) Optimize the initial population by selection, crossover, and mutation:
(7.5.2a) Select individuals according to their fitness values, preferring large ones; the fitness is computed with the mutual-information formula MI(F, R) of step (3.2.4), i.e.:
eval = H(IF) + H(IR) − H(IF, IR)
(7.5.2b) Arbitrarily select two parent individuals and randomly choose a position on their chromosomes at which to exchange genes, obtaining two new offspring. This expands the search space and helps find individuals better than those in the initial population;
(7.5.2c) Randomly select individuals and apply gene mutation. This increases the diversity of the population and effectively prevents the algorithm from getting trapped in a local optimum;
(7.5.3) If the population has evolved for 100 generations or the fitness exceeds 0.9, stop the iteration, output the optimized transformation parameters θ, tx, ty, and go to step (7.2); otherwise return to step (7.5.2) and continue iterating.
Step 8: apply the affine transformation to the image F to be registered.
Using the affine transformation parameters recorded in the previous step, i.e. the optimal translation and rotation parameters θ, tx, ty, apply the translation and rotation transformations to the image F to be registered, obtaining the image F1.
Translation formula is as follows:
Rotation formula is as follows:
where (x', y') are the coordinates of the image F1 obtained after the affine transformation of the image F to be registered, (x, y) are the coordinates of the image F to be registered, θ is the rotation angle by which (x, y) is transformed to (x', y'), and tx and ty are the translation distances in the x and y directions.
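Steps (7.2) and 8 map each coordinate (x, y) to (x', y') by a rotation through θ and a translation by (tx, ty). The translation and rotation formulas themselves are rendered as images in the source, so the composition order here (rotate about the origin, then translate) is an assumption; the sketch shows the coordinate algebra only.

```python
import numpy as np

def affine_transform(points, theta, tx, ty):
    """Map coordinates (x, y) to (x', y'): rotate by theta about the
    origin, then translate by (tx, ty) (steps 7.2 and 8)."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s],
                    [s,  c]])
    return np.asarray(points, float) @ rot.T + np.array([tx, ty])
```

To resample a whole image rather than transform a point set, the inverse mapping would normally be applied per output pixel (e.g. with scipy.ndimage), but the coordinate algebra is the same.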
Step 9: fuse the image F1, obtained from the affine transformation of the image F to be registered, with the reference image R, obtaining the registered image FR.
In this step, the image F1 obtained from the affine transformation of the image F to be registered is fused with the reference image R to obtain the registered image FR. The fusion compares the grey value of each pixel of the two images: if the pixel grey values of the two images are equal, the pixel value of FR at that point equals the pixel value of the reference image R; if they differ, the pixel value of FR at that point equals the pixel value of the image F1 obtained after the affine transformation of the image F to be registered.
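The fusion rule of step 9 can be transcribed literally. Note that where the two grey values agree, the reference pixel equals the F1 pixel by definition, so the comparison mainly serves as a per-pixel consistency check; this sketch follows the stated rule as written.

```python
import numpy as np

def fuse(f1, r):
    """Step 9: where the grey values of F1 and R agree, take the pixel
    of the reference image R; where they differ, take the pixel of the
    affine-transformed image F1."""
    f1 = np.asarray(f1)
    r = np.asarray(r)
    return np.where(f1 == r, r, f1)
```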
The effect of the invention can be further illustrated by the following simulations:
Registration experiments on a medical image, a natural image, and a synthetic aperture radar (SAR) image were carried out with the method of the invention. The performance of the proposed registration method is evaluated by whether the similar rectangular frames can be located precisely in the medical, natural, and SAR images and whether the registration results are accurate. The analyses of the simulations on the three types of images are as follows:
Simulation 1: registration of medical images. The two experimental images are an X-ray image of a human brain and the same image rotated 10 degrees to the right. The results are shown in Fig. 2, where:
Fig. 2(a) is the X-ray image of the human brain,
Fig. 2(b) is the rotated X-ray image of the human brain,
Fig. 2(c) is the most similar rectangular frame in the X-ray image of the human brain, with the corner features inside it,
Fig. 2(d) is the most similar rectangular frame in the rotated X-ray image, with the corner features inside it,
Fig. 2(e) is the registration result of the X-ray image of the human brain and its rotated version.
Figs. 2(c) and 2(d) show that the similar rectangular frames found by the method of the invention are located with high precision and that the matching corners are extracted accurately.
Fig. 2(e) shows that the registration of the two images is very accurate: after accurate translation and rotation, the rotated X-ray image is fused with the original, and the fusion result coincides almost completely with the X-ray image of the human brain.
Simulation 2: registration of natural images. The two experimental images are the cropped right half and the cropped left half of the Lenna image. The results are shown in Fig. 3, where:
Fig. 3(a) is the left half of the Lenna image,
Fig. 3(b) is the right half of the Lenna image,
Fig. 3(c) is the most similar rectangular frame in the left half of the Lenna image, with the corner features inside it,
Fig. 3(d) is the most similar rectangular frame in the right half of the Lenna image, with the corner features inside it,
Fig. 3(e) is the registration result of the left and right halves of the Lenna image.
Figs. 3(c) and 3(d) show that the invention finds a pair of exactly coinciding rectangular frames in the left and right halves of the Lenna image and extracts the corner features inside the most similar frame of the left half very accurately: the corners at the figure's nose, lips, shoulder, and hair are extracted exactly and match the corners extracted inside the most similar frame of the right half completely.
Fig. 3(e) shows that, after the affine transformation with the accurate translation, the left half of the Lenna image is fused with the right half with almost no misalignment.
Emulation experiment:Three emulation experiments of the present invention three are the image registration that carries out to synthetic aperture radar SAR image, are adopted Two width registration experimental image be respectively the SAR image that sensors A is photographed and the SAR image that sensor B is photographed, emulation knot Fruit is as shown in figure 4, wherein:
Fig. 4(a) is the SAR image taken by sensor A,
Fig. 4(b) is the SAR image taken by sensor B,
Fig. 4(c) is the most similar rectangular frame in the SAR image taken by sensor A, together with the corner features inside it,
Fig. 4(d) is the most similar rectangular frame in the SAR image taken by sensor B, together with the corner features inside it,
Fig. 4(e) is the registration result of the SAR images taken by sensor A and sensor B.
Figs. 4(c) and 4(d) show that the similar rectangular frames found by the present invention in the SAR images taken by sensor A and sensor B coincide almost completely, and that the corner information is extracted with sufficient accuracy; even though the two images differ by a rotation, the similar frames found by the method remain very accurate.
Fig. 4(e) shows that after the SAR image taken by sensor B is accurately translated and rotated, it is fused with the SAR image taken by sensor A; the roads and field lines of the two images coincide almost completely, and the fusion result shows almost no misalignment.

Claims (4)

1. An image registration method based on mutual information and Harris corner detection, comprising the following steps:
(1) input the image to be registered F and the reference image R;
(2) place n rectangular frames at arbitrary positions in the image to be registered F and in the reference image R, obtaining n pairs of frames; the frame size is set according to the size of the similar regions of the two images;
(3) compute the mutual information between each frame of the image to be registered F and each frame of the reference image R, iteratively optimize this mutual information with a genetic algorithm, and take the pair of frames with the largest mutual information value as the similar frames;
(4) smooth the two similar frames with a Gaussian window, then extract the one-to-one corresponding corners inside the two similar frames with the Harris corner detection method, obtaining k corner pairs, where k is an integer greater than or equal to 1;
(5) compute the distances dx_i and dy_i of the i-th corner pair in the x and y directions, and the means μ_x, μ_y and standard deviations σ_x, σ_y of the distances of all corner pairs in the two directions; set a threshold ω according to the number of matching point pairs needed to register the two frames, and judge whether the distances of the i-th pair satisfy the following conditions; if so, the pair is regarded as a match, otherwise it is regarded as a mismatch and deleted:

|dx_i − μ_x| ≤ ω·σ_x, i = 0, 1, 2, …, k;
|dy_i − μ_y| ≤ ω·σ_y, i = 0, 1, 2, …, k,

where k is the total number of extracted corner pairs;
(6) set a corner-count threshold δ according to the images being registered, and judge whether the number of matching point pairs is greater than δ; if so, go to the next step, otherwise return to step (5);
(7) apply affine transformations to the frame in the image to be registered F; after each transformation, compute the mutual information of the two similar frames and optimize it with a genetic algorithm; when the mutual information value of the two similar frames reaches its maximum, record the transformation parameter values at that point, i.e. the translation distances and the rotation angle;
(8) apply an affine transformation to the image to be registered F using the recorded transformation parameter values, obtaining the transformed image F1;
(9) fuse the transformed image F1 with the reference image R to obtain the registered image.
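The mismatch-rejection test of steps (5) and (6) is a simple statistical screen on the corner-pair displacements. The following Python/NumPy sketch illustrates it under stated assumptions: the function name reject_mismatches, the (k, 2) array layout, and the default ω = 2 are illustrative choices, not part of the claim.

```python
import numpy as np

def reject_mismatches(pts_f, pts_r, omega=2.0):
    """Keep only corner pairs whose x and y displacements lie within
    omega standard deviations of the mean displacement (step (5))."""
    d = pts_f - pts_r                  # per-pair displacements (dx_i, dy_i)
    mu = d.mean(axis=0)                # (mu_x, mu_y)
    sigma = d.std(axis=0)              # (sigma_x, sigma_y)
    keep = np.all(np.abs(d - mu) <= omega * sigma, axis=1)
    return pts_f[keep], pts_r[keep]
```

A pair survives only if both |dx_i − μ_x| ≤ ω·σ_x and |dy_i − μ_y| ≤ ω·σ_y hold, mirroring the two inequalities of step (5).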
2. The image registration method based on mutual information and Harris corner detection according to claim 1, wherein step (3) computes the mutual information inside the image frames as follows:
(2a) compute the joint probability density of the two frames, and the marginal probability density of each frame:
p_{F,R}(DN_F, DN_R) = h(DN_F, DN_R) / ( Σ_{DN_F} Σ_{DN_R} h(DN_F, DN_R) )

p_R(DN_R) = Σ_{DN_F} p_{F,R}(DN_F, DN_R)

p_F(DN_F) = Σ_{DN_R} p_{F,R}(DN_F, DN_R)
where p_{F,R}(DN_F, DN_R) is the joint probability density of the frame of the image to be registered F and the frame of the reference image R; p_R(DN_R) is the marginal probability density of the frame of the reference image R; p_F(DN_F) is the marginal probability density of the frame of the image to be registered F; h is the joint grey-level histogram of the two frames; h(DN_F, DN_R) is the number of occurrences of the pixel combination whose grey value is DN_F in the frame of F and DN_R in the frame of R; for the case in which the grey values inside the frame of F range over (0, m) and those inside the frame of R range over (0, n), DN_F and DN_R satisfy 0 ≤ DN_F ≤ m and 0 ≤ DN_R ≤ n, where m is the maximum pixel grey value inside the frame of F and n is the maximum pixel grey value inside the frame of R;
(2b) compute the mutual information value inside the frames from the quantities obtained in step (2a):
H_FR = − Σ_{DN_F} Σ_{DN_R} p_{F,R}(DN_F, DN_R) log p_{F,R}(DN_F, DN_R)

H_F = − Σ_{DN_F} p_F(DN_F) log p_F(DN_F)

H_R = − Σ_{DN_R} p_R(DN_R) log p_R(DN_R)

MI(F, R) = H_F + H_R − H_FR
where H_F is the entropy of the frame of the image to be registered F, H_R is the entropy of the frame of the reference image R, H_FR is the joint entropy of the frames of F and R, and MI(F, R) is the mutual information value between the frames of F and R.
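The histogram-based computation of claim 2 can be sketched in a few lines of Python/NumPy. The bin count and the small eps guarding log 0 are implementation assumptions not fixed by the claim, which bins on raw grey values.

```python
import numpy as np

def mutual_information(block_f, block_r, bins=64, eps=1e-12):
    """MI(F, R) = H_F + H_R - H_FR, estimated from the joint
    grey-level histogram of two equal-size image blocks (claim 2)."""
    h, _, _ = np.histogram2d(block_f.ravel(), block_r.ravel(), bins=bins)
    p_fr = h / h.sum()                 # joint probability p_{F,R}
    p_f = p_fr.sum(axis=1)             # marginal p_F
    p_r = p_fr.sum(axis=0)             # marginal p_R
    h_fr = -np.sum(p_fr * np.log(p_fr + eps))   # joint entropy H_FR
    h_f = -np.sum(p_f * np.log(p_f + eps))      # entropy H_F
    h_r = -np.sum(p_r * np.log(p_r + eps))      # entropy H_R
    return h_f + h_r - h_fr
```

Identical blocks maximise this value, which is why step (3) can drive a genetic-algorithm search for the most similar pair of frames.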
3. The image registration method based on mutual information and Harris corner detection according to claim 1, wherein step (4) extracts the corners inside the two similar frames with the Harris corner detection method as follows:
(4a) compute the values of the three autocorrelation parameters A, B and C for each pixel:

A = w(x, y) ⊗ I_x², B = w(x, y) ⊗ I_y², C = w(x, y) ⊗ (I_x·I_y)
where w(x, y) is the Gaussian window function w(x, y) = exp(−(x² + y²)/(2σ²)), σ is the variance, x and y are the coordinates in the horizontal and vertical directions respectively, ⊗ is the convolution operator, I_x is the difference in the x direction, and I_y is the difference in the y direction;
(4b) compute the Harris corner response R of each pixel:
R = (AB − C²) − k(A + B)², where k is any value between 0.04 and 0.06;
(4c) judge whether each pixel is a corner: if its corner response R is greater than 0.5, it is regarded as a corner and extracted; otherwise it is regarded as a non-corner.
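Claim 3 can be sketched directly from its formulas. The NumPy-only sketch below builds a discrete Gaussian window, smooths the squared gradients to obtain A, B and C, and evaluates the response R = (AB − C²) − k(A + B)²; the window radius and the helper names are illustrative assumptions.

```python
import numpy as np

def _gaussian_window(sigma=1.0, radius=3):
    # discrete Gaussian window w(x, y) = exp(-(x^2 + y^2) / (2 sigma^2)), normalised
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return w / w.sum()

def _convolve(img, w):
    # direct 2-D convolution with the symmetric window w, zero-padded borders
    r = w.shape[0] // 2
    padded = np.pad(img, r)
    out = np.zeros(img.shape)
    for dy in range(w.shape[0]):
        for dx in range(w.shape[1]):
            out += w[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def harris_response(img, sigma=1.0, k=0.04):
    # A = w ⊗ Ix^2, B = w ⊗ Iy^2, C = w ⊗ (Ix·Iy); R = (AB - C^2) - k(A + B)^2
    iy, ix = np.gradient(img.astype(float))
    w = _gaussian_window(sigma)
    a = _convolve(ix * ix, w)
    b = _convolve(iy * iy, w)
    c = _convolve(ix * iy, w)
    return (a * b - c * c) - k * (a + b) ** 2
```

On a white square against a black background the response is positive near the square's corners, near zero in flat regions, and negative along straight edges.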
4. The image registration method based on mutual information and Harris corner detection according to claim 1, wherein the affine transformation of the image to be registered F in step (8) translates and rotates F by the following formulas:
translation formula:

    [x']   [1  0] [x]   [t_x]
    [y'] = [0  1] [y] + [t_y],

rotation formula:

    [x']   [ cos θ   sin θ] [x]
    [y'] = [−sin θ   cos θ] [y]
where (x', y') are the coordinates of the image F1 obtained by the affine transformation of the image to be registered F, (x, y) are the coordinates of F, θ is the angle through which the coordinates (x, y) are rotated into (x', y'), and t_x and t_y are the translation distances in the x and y directions respectively.
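The two formulas of claim 4 compose into a rotation followed by a translation. The minimal NumPy sketch below applies exactly the claimed rotation matrix and translation vector to an array of (x, y) points; the function name rigid_transform is an illustrative assumption.

```python
import numpy as np

def rigid_transform(points, theta, tx, ty):
    """Rotate (x, y) points by theta with the claim-4 matrix
    [[cos t, sin t], [-sin t, cos t]], then translate by (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, s],
                    [-s, c]])
    return points @ rot.T + np.array([tx, ty])
```

With theta = 0 the rotation matrix becomes the identity, and the call reduces to the pure translation formula of the claim.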
CN201410269698.7A 2014-06-17 2014-06-17 Image registration method based on mutual information and Harris corner point detection Active CN104021559B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410269698.7A CN104021559B (en) 2014-06-17 2014-06-17 Image registration method based on mutual information and Harris corner point detection

Publications (2)

Publication Number Publication Date
CN104021559A CN104021559A (en) 2014-09-03
CN104021559B true CN104021559B (en) 2017-04-19

Family

ID=51438297


Country Status (1)

Country Link
CN (1) CN104021559B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881645B (en) * 2015-05-26 2018-09-14 南京通用电器有限公司 The vehicle front mesh object detection method of feature based point mutual information and optical flow method
CN106778899B (en) * 2016-12-30 2019-12-10 西安培华学院 Rapid mutual information image matching method based on statistical correlation
CN106960449B (en) * 2017-03-14 2020-02-14 西安电子科技大学 Heterogeneous registration method based on multi-feature constraint
CN107462875B (en) * 2017-07-25 2020-04-10 西安电子科技大学 Cognitive radar maximum MI (maximum MI) waveform optimization method based on IGA-NP (ensemble-nearest neighbor) algorithm
CN109963070A (en) * 2017-12-26 2019-07-02 富泰华工业(深圳)有限公司 Picture stitching method and system
CN108230375B (en) * 2017-12-27 2022-03-22 南京理工大学 Registration method of visible light image and SAR image based on structural similarity rapid robustness
CN108109112B (en) * 2018-01-16 2021-07-20 上海同岩土木工程科技股份有限公司 Tunnel layout graph splicing parameter processing method based on Sift characteristic
CN109064488B (en) * 2018-07-05 2022-08-09 北方工业大学 Method for matching and tracking specific building in unmanned aerial vehicle video
CN109461140A (en) * 2018-09-29 2019-03-12 沈阳东软医疗系统有限公司 Image processing method and device, equipment and storage medium
CN111311673B (en) * 2018-12-12 2023-11-03 北京京东乾石科技有限公司 Positioning method and device and storage medium
CN110334372B (en) * 2019-04-22 2023-02-03 武汉建工智能技术有限公司 BIM augmented reality simulation method based on drawing registration
CN110290365B (en) * 2019-07-12 2021-02-02 广东技术师范大学天河学院 Edge fusion method
CN112669378A (en) * 2020-12-07 2021-04-16 山东省科学院海洋仪器仪表研究所 Method for rapidly detecting angular points of underwater images of seawater

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101251926A (en) * 2008-03-20 2008-08-27 北京航空航天大学 Remote sensing image registration method based on local configuration covariance matrix

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3612756B2 (en) * 1994-11-10 2005-01-19 富士ゼロックス株式会社 Operation device for image input / output device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101251926A (en) * 2008-03-20 2008-08-27 北京航空航天大学 Remote sensing image registration method based on local configuration covariance matrix

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xiaofeng Fan et al., "A Spatial-Feature-Enhanced MMI Algorithm for Multimodal Airborne Image Registration", IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 6, pp. 2580-2589, 30 June 2010. *
Lü Xuan, "Multimodal medical image registration based on corner features and maximum mutual information", China Master's Theses Full-text Database, Information Science and Technology, no. 08, 2008, abstract and pp. 21-40. *

Also Published As

Publication number Publication date
CN104021559A (en) 2014-09-03

Similar Documents

Publication Publication Date Title
CN104021559B (en) Image registration method based on mutual information and Harris corner point detection
CN103310453B (en) A kind of fast image registration method based on subimage Corner Feature
Campo et al. Multimodal stereo vision system: 3D data extraction and algorithm evaluation
CN104008370B (en) A kind of video face identification method
CN105809693B (en) SAR image registration method based on deep neural network
CN105184801B (en) It is a kind of based on multi-level tactful optics and SAR image high-precision method for registering
CN104820997B (en) A kind of method for tracking target based on piecemeal sparse expression Yu HSV Feature Fusion
CN107424161B (en) Coarse-to-fine indoor scene image layout estimation method
CN103886589A (en) Goal-oriented automatic high-precision edge extraction method
US20150347804A1 (en) Method and system for estimating fingerprint pose
CN101980250A (en) Method for identifying target based on dimension reduction local feature descriptor and hidden conditional random field
CN102982334B (en) The sparse disparities acquisition methods of based target edge feature and grey similarity
CN114187665B (en) Multi-person gait recognition method based on human skeleton heat map
CN104751111B (en) Identify the method and system of human body behavior in video
CN105138983B (en) The pedestrian detection method divided based on weighting block model and selective search
CN103955950B (en) Image tracking method utilizing key point feature matching
CN111914761A (en) Thermal infrared face recognition method and system
CN106251362A (en) A kind of sliding window method for tracking target based on fast correlation neighborhood characteristics point and system
CN103679720A (en) Fast image registration method based on wavelet decomposition and Harris corner detection
CN105469111A (en) Small sample set object classification method on basis of improved MFA and transfer learning
CN105631899A (en) Ultrasonic image motion object tracking method based on gray-scale texture feature
CN103761768A (en) Stereo matching method of three-dimensional reconstruction
CN104616280A (en) Image registration method based on maximum stable extreme region and phase coherence
CN104732546A (en) Non-rigid SAR image registration method based on region similarity and local spatial constraint
CN105513094A (en) Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant