CN106780297A - High-accuracy image registration method under scene and illumination changes - Google Patents

High-accuracy image registration method under scene and illumination changes

Info

Publication number
CN106780297A
CN106780297A (application CN201611080132.5A)
Authority
CN
China
Prior art keywords
illumination
camera
image
sigma
saliency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611080132.5A
Other languages
Chinese (zh)
Other versions
CN106780297B (en)
Inventor
冯伟
孙济洲
田飞鹏
张乾
周策
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201611080132.5A priority Critical patent/CN106780297B/en
Publication of CN106780297A publication Critical patent/CN106780297A/en
Application granted granted Critical
Publication of CN106780297B publication Critical patent/CN106780297B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T3/14
    • G06T5/80

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a high-accuracy image registration method under scene and illumination changes. The steps include: camera relocation, initialization of L and F, illumination correction, and correction of the camera's geometric position. The invention proposes a multi-scale low-rank saliency detection method and, together with a GMM-based co-saliency prior, generalizes multi-scale low-rank saliency detection to co-saliency detection across multiple images, so as to detect identical or similar regions that appear in several images. Compared with traditional saliency detection methods, the proposed low-rank multi-scale superpixel fusion algorithm solves the difficult problem of scale selection and achieves more reliable saliency detection results.

Description

High-accuracy image registration method under scene and illumination changes
Technical field
The invention belongs to the field of image registration and relates to a high-accuracy registration method for images captured under both scene and illumination changes. The method applies fast camera relocation to the problem of high-accuracy registration of images in which both the scene and the illumination have changed, with the goal of registering the current image at high accuracy against the 1 principal-direction image and the 6 auxiliary-direction images captured during the previous observation in the camera relocation process.
Background technology
The background technologies involved in the present invention are:
(1) Change detection (Change Detection): change detection is an important preprocessing step for high-level vision applications. For the detection and decision stages of change detection, CDNet provides a large, realistic benchmark video dataset and maintains a ranking of change detection algorithms. Among many recently proposed algorithms, such as SOBS, SC_SOBS and SuBSENSE, background modeling is one of the most successful strategies on CDNet. Other developments worth mentioning include change detection based on 3D voxels and city-scale structural change detection using multiple panoramas together with range data. To cope with illumination changes and camera motion, current state-of-the-art methods focus on spatially and photometrically significant changes, while subtle changes are implicitly treated as noise. However, at small scales where spatial and photometric variations appear insignificant, this assumption largely limits their ability to detect subtle texture changes. Our invention shows that low-rank analysis can be used to decompose the sparse changes across several pictures.
(2) Color constancy (Color Constancy): in change detection, fast color constancy is widely applied to correct illumination changes. Due to real-time speed requirements, most change detection methods can only use simple, static color constancy treatments, which limits their tolerance of frequent and drastic illumination changes. In addition, intrinsic images can be used to correct for the differences between different illuminations of the same scene. Recently, intrinsic image decomposition has been further extended to various components, including shape, illumination and reflectance decomposed from one or several pictures. However, these recent developments either require complicated optimization objectives or require capturing many pictures under dense lighting conditions, which prevents them from being applied directly to low-cost, end-to-end, fine-grained change detection.
(3) Geometry correction (Geometry Correction): geometry correction is another indispensable part of robust change detection. Conventional methods for rigid scenes include similarity, affine or projective transformations. For non-rigid dynamic scenes, optical flow can be used to correct the deviation of misaligned stereo cameras. Our invention takes the multi-illumination constraints into account and employs an extended SIFT flow for geometry correction.
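The closing claim of paragraph (1), that low-rank analysis can separate sparse changes across several aligned pictures, can be sketched in a few lines of numpy; the rank-1 background assumption, function name, and toy data below are illustrative, not the patent's actual algorithm:

```python
import numpy as np

def lowrank_sparse_split(images, rank=1):
    """Split a stack of aligned, vectorized images into a low-rank
    background and a sparse residual via truncated SVD.
    images: (n_pixels, n_images) matrix, one image per column."""
    U, s, Vt = np.linalg.svd(images, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    sparse = images - low_rank          # what the low-rank part cannot explain
    return low_rank, sparse

# toy example: 5 identical "backgrounds", one with a small localized change
rng = np.random.default_rng(0)
background = rng.random(100)
stack = np.tile(background[:, None], (1, 5))
stack[10, 3] += 0.5                      # a sparse change in picture 3
low, sparse = lowrank_sparse_split(stack, rank=1)
changed = np.unravel_index(np.abs(sparse).argmax(), sparse.shape)
print(changed)  # → (10, 3): the change is localized at pixel 10 of picture 3
```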
The content of the invention
Addressing the inability of the prior art to accurately detect fine-grained image changes in high-level scenes, the present invention applies camera relocation to fine change detection. By acquiring images under scene and illumination changes, it provides a new end-to-end, coarse-to-fine technique that guarantees the robustness and accuracy of change detection, which helps further improve the ability to detect subtle, fine texture changes.
To this end, the present invention adopts the following technical scheme, which mainly comprises the following steps:
1. Camera relocation:
Collect scene observation samples at different times. 7 pictures are captured for each scene: 1 under environment lighting (EL) and 6 under fixed directional side lighting (DSL). These pictures are stored as column vectors.
Let X be the matrix formed by the above 7 illuminations:

X = [x_{EL}, x_{DSL}^{1}, \ldots, x_{DSL}^{K}]

where 1 ≤ k ≤ K and K is the number of side-lighting directions (here K = 6).
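Storing each observation as a column vector and assembling the matrix X can be sketched as follows (the function name and toy data are my own, not from the patent):

```python
import numpy as np

def build_observation_matrix(el_image, dsl_images):
    """Stack one environment-lighting (EL) image and K directional
    side-lighting (DSL) images as column vectors of a single matrix,
    X = [x_EL, x_DSL^1, ..., x_DSL^K]."""
    columns = [el_image.reshape(-1)] + [im.reshape(-1) for im in dsl_images]
    return np.stack(columns, axis=1)

# K = 6 side-lighting directions as in the patent, on a toy 4x4 scene
el = np.zeros((4, 4))
dsl = [np.full((4, 4), k + 1.0) for k in range(6)]
X = build_observation_matrix(el, dsl)
print(X.shape)  # (16, 7): 16 pixels, 1 EL column + 6 DSL columns
```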
The current camera is relocated to an angle and position close to the previous observation; after roughly adjusting by comparison against the multiple DSL images of the previous observation, the current observation matrix Y is obtained:

Y = [y_{EL}, y_{DSL}^{1}, \ldots, y_{DSL}^{K}]
Camera relocation steps:
(a) Initialize the current camera to a reasonable position, such that this position covers a region large enough to contain the real target. I_c denotes the pose and position of the camera for the current photo;
(b) Keep a blue rectangle R_b at the center of I_c; a red navigation rectangle R_r represents the relative geometric difference between the current camera pose and the target area. Dynamically adjust the camera pose and position until the following equation holds:

R_r = H R_b

where H is the homography computed from I_c and X_{EL}.
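The termination test R_r = H R_b can be sketched numerically; the 3×3 homography, the corner representation of the rectangles, the tolerance, and all function names below are assumptions for illustration:

```python
import numpy as np

def apply_homography(H, corners):
    """Map 2D rectangle corners through a 3x3 homography H
    (lift to homogeneous coordinates, then de-homogenize)."""
    pts = np.hstack([corners, np.ones((len(corners), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def relocation_done(H, Rb, tol=1.0):
    """Relocation terminates when the navigation rectangle R_r = H R_b
    coincides with R_b, i.e. when H is close to the identity."""
    Rr = apply_homography(H, Rb)
    return np.abs(Rr - Rb).max() < tol

Rb = np.array([[100., 100.], [300., 100.], [300., 200.], [100., 200.]])
H_off = np.array([[1., 0., 25.], [0., 1., -10.], [0., 0., 1.]])  # shifted view
print(relocation_done(H_off, Rb), relocation_done(np.eye(3), Rb))  # False True
```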
2. Initialization of L (illumination difference) and F (camera geometric-correction flow):
L denotes the illumination difference, and F denotes the geometric correction flow of the camera.
Given X and Y from (1), assume images x_i and y_i, where i = 0 denotes the EL image and 1 ≤ i ≤ K denote the DSL images. A global linear photometric calibration matrix \hat{A}_i and an offset vector \hat{b}_i are obtained by:

[\hat{A}_i, \hat{b}_i] = \arg\min_{A_i, b_i} \| A_i \tilde{x}_i + b_i - \tilde{y}_i \|_F^2   (*)

where \tilde{x}_i and \tilde{y}_i are the RGB colors of the matched SIFT features of x_i and y_i, respectively. (*) defines the canonical illumination model and can be solved in closed form.
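Since (*) is an ordinary linear least-squares problem, its closed-form solution can be sketched with numpy; the synthetic matched colors and the function name below are illustrative assumptions:

```python
import numpy as np

def fit_photometric_calibration(x_rgb, y_rgb):
    """Closed-form solve of  min_{A,b} || A x + b - y ||_F^2  over matched
    feature colors. x_rgb, y_rgb: (3, M) RGB colors of matched SIFT
    features. Returns the 3x3 matrix A and the 3-vector b."""
    M = x_rgb.shape[1]
    x_aug = np.vstack([x_rgb, np.ones((1, M))])   # append 1 for the offset
    sol, *_ = np.linalg.lstsq(x_aug.T, y_rgb.T, rcond=None)
    Ab = sol.T                                    # (3, 4) = [A | b]
    return Ab[:, :3], Ab[:, 3]

# synthetic check: recover a known gain/offset from noiseless matches
rng = np.random.default_rng(1)
A_true = np.diag([1.2, 0.9, 1.1])
b_true = np.array([0.05, -0.02, 0.01])
x = rng.random((3, 50))
y = A_true @ x + b_true[:, None]
A_est, b_est = fit_photometric_calibration(x, y)
print(np.allclose(A_est, A_true), np.allclose(b_est, b_true))  # True True
```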
3. Perceptually-normal illumination correction:
Based on the Lambertian reflectance model (Lambertian Reflectance Model):

I_p = \int \langle n_p, w \rangle \, \rho_p \, L(w) \, dw

where I_p, n_p and \rho_p denote the color, surface normal and albedo of pixel p, respectively, and L(w) is the lighting function.
By adding a virtual light L_v(\cdot) to X_F, its illumination deviation from the current observation Y is corrected, yielding the following equations:

X_{LF}^{p} = \int \langle n_p, w \rangle \, \rho_p \, (L_x(w) + L_v(w)) \, dw

X_{LF}^{p} = x_F^{p} + L_p^{v} = y_p

where X_{LF}^{p} is the color of pixel p after correction and L_p^{v} is the color increment produced by adding the virtual light.
Therefore, the photometric difference between X and Y is balanced with the following function, which is to be minimized:

L_i = \arg\min_{L^v} \sum_p \big( x_F^{p,i} + L_p^{v} - y_p^{i} \big)^2 \exp(-C_p/\sigma) + \alpha \sum_{p \sim q} w_{pq} \big( L_p^{v} - L_q^{v} \big)^2

where L_i is the spatially varying, perceptually-normal illumination difference between X and Y under the i-th illumination. The first term encourages photometric consistency between x_F^i + L^v and y_i; its weight \exp(-C_p/\sigma) is variable, where 0 ≤ C_p ≤ 1 is the probability that pixel p has changed. The second term encourages smoothness of the virtual light field: w_{pq} denotes the similarity between p and q, and p \sim q denotes that p and q are adjacent.
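Because the energy above is quadratic in L^v, its minimizer solves a linear system. Below is a minimal sketch on a 1D pixel chain with unit similarity weights w_{pq} (a 2D image only changes the graph Laplacian, and sparse solvers apply at image scale); the function and parameter names are my own:

```python
import numpy as np

def solve_virtual_light(x, y, C, sigma=0.1, alpha=1.0):
    """Minimize  sum_p (x_p + Lv_p - y_p)^2 exp(-C_p/sigma)
               + alpha * sum_{p~q} (Lv_p - Lv_q)^2
    on a 1D pixel chain with w_pq = 1.  Setting the gradient to zero gives
    (D + alpha*Lap) Lv = D (y - x), with D = diag(exp(-C/sigma)) and Lap
    the chain-graph Laplacian."""
    n = len(x)
    w = np.exp(-C / sigma)                 # change-aware data weights
    D = np.diag(w)
    Lap = np.zeros((n, n))
    for p in range(n - 1):                 # edges p ~ p+1
        Lap[p, p] += 1; Lap[p + 1, p + 1] += 1
        Lap[p, p + 1] -= 1; Lap[p + 1, p] -= 1
    return np.linalg.solve(D + alpha * Lap, w * (y - x))

x = np.zeros(8)
y = np.full(8, 0.3)              # a uniform illumination offset to absorb
C = np.zeros(8)                  # no pixel is believed to have changed
Lv = solve_virtual_light(x, y, C, alpha=0.5)
print(np.allclose(Lv, 0.3))      # True: the virtual light absorbs the offset
```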
4. Camera geometric position correction:
From the perceptually-normal illumination differences L_i, a new photometrically corrected observation X_L is obtained. By extending the SIFT-flow framework, its energy function is modified as follows:

E(F) = \sum_{i,p} \| x_L^{i}(p + F_p) - y^{i}(p) \|_1 \exp(-C_p/\sigma) + \beta \sum_p \| F_p \|_2^2 + \sum_{p \sim q} \min( \gamma \| F_p - F_q \|_1, d )

where \exp(-C_p/\sigma) is the same as in (3).
At this point, the above steps yield a picture whose position and illumination are almost identical to those of the previously captured picture, and detection can be carried out on it to find fine texture changes.
The technique proposed by the invention, which performs saliency and co-saliency detection through low-rank multi-scale superpixel fusion, mainly comprises the following steps:
1) single-scale saliency detection;
2) multi-scale saliency fusion;
3) saliency refinement.
The present invention achieves more reliable saliency detection results, helping to further improve the saliency detection capability of the prior art.
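The three steps can be caricatured in numpy: a rank-1 low-rank residual per scale as the single-scale detector, averaging as the fusion, and min-max normalization as the refinement. This is a toy stand-in for the patent's superpixel-based method, with square blocks instead of superpixels; all names and data are my own:

```python
import numpy as np

def single_scale_saliency(img, patch):
    """Low-rank saliency at one scale: tile the image into patch x patch
    blocks, treat each block as a feature column, and score each block by
    its residual from a rank-1 approximation (background = low-rank)."""
    h, w = img.shape
    hh, ww = h // patch * patch, w // patch * patch
    blocks = (img[:hh, :ww]
              .reshape(hh // patch, patch, ww // patch, patch)
              .transpose(0, 2, 1, 3)
              .reshape(-1, patch * patch).T)        # (patch^2, n_blocks)
    U, s, Vt = np.linalg.svd(blocks, full_matrices=False)
    resid = blocks - (U[:, :1] * s[:1]) @ Vt[:1, :]
    scores = np.linalg.norm(resid, axis=0).reshape(hh // patch, ww // patch)
    sal = np.kron(scores, np.ones((patch, patch)))  # back to pixel resolution
    full = np.zeros((h, w)); full[:hh, :ww] = sal
    return full

def multiscale_saliency(img, scales=(2, 4)):
    """Fuse single-scale maps by averaging, then refine to [0, 1]."""
    fused = np.mean([single_scale_saliency(img, s) for s in scales], axis=0)
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-12)

# textured background (checkerboard) with one anomalous flat square
img = (np.indices((8, 8)).sum(0) % 2).astype(float)
img[2:4, 2:4] = 1.0
sal = multiscale_saliency(img)
print(sal[2:4, 2:4].mean() > sal[6:8, 6:8].mean())  # True: anomaly is salient
```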
Advantages and positive effects of the present invention:
The present invention proposes a multi-scale low-rank saliency detection method and, together with a GMM-based co-saliency prior, generalizes multi-scale low-rank saliency detection to co-saliency detection across multiple images, so as to detect identical or similar regions that appear in several images. Compared with traditional saliency detection methods, the proposed low-rank multi-scale superpixel fusion algorithm solves the difficult problem of scale selection and achieves more reliable saliency detection results.
Brief description of the drawings
Fig. 1: Flow chart of high-accuracy image registration
Fig. 2: Detection of slight changes on a Buddha statue in the Summer Palace
Fig. 3: Camera relocation diagram
Specific embodiments
The invention is further described below with reference to the accompanying drawings and specific embodiments. The following embodiments are descriptive rather than limiting, and cannot be used to limit the protection scope of the present invention.
A high-accuracy image registration method under scene and illumination changes proceeds through the four steps described above: camera relocation, initialization of L and F, perceptually-normal illumination correction, and camera geometric position correction.
The low-rank multi-scale fusion image saliency detection method is based on the low-rank multi-scale fusion approach: the two registered images are processed with it, so that the saliency of different regions can be detected.
Its operating procedure is as follows:
Step S1: Prepare two images, the reference image and the current image, and load them into the method.
Step S2: Initialize the illumination and optical flow of the current image: using the current image and the reference image, transform the image under the current state into an image under the same lighting as the previous state, preliminarily eliminating the influence of the two different illuminations.
Step S3: Apply illumination correction to the current-state image: compute the transformation matrix from the previous-state image to the current-state image, and apply the transformation to the current image, finally achieving the effect of perceptually-normal illumination correction.
Step S4: Apply geometric correction to the current image: compute the offsets between the two images and apply them to the current image, finally achieving the effect of camera geometric correction.
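Steps S1-S4 can be sketched end-to-end on synthetic data; the sorted-intensity photometric fit and the brute-force shift search below are simplified stand-ins for the patent's illumination correction and extended SIFT flow, and every name here is my own:

```python
import numpy as np

def photometric_align(cur, ref):
    """Steps S2-S3 (sketch): estimate a global gain/offset mapping the
    current image's intensities onto the reference's by matching sorted
    intensities (valid when the photometric change is a global monotone
    map)."""
    a, b = np.sort(cur.ravel()), np.sort(ref.ravel())
    A = np.vstack([a, np.ones(a.size)]).T
    (gain, offset), *_ = np.linalg.lstsq(A, b, rcond=None)
    return gain * cur + offset

def geometric_align(cur, ref, max_shift=2):
    """Step S4 (sketch): brute-force search over integer translations."""
    best, best_err = cur, np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(cur, dy, axis=0), dx, axis=1)
            err = np.abs(shifted - ref).sum()
            if err < best_err:
                best, best_err = shifted, err
    return best

# Step S1: prepare reference and current images (synthetic here)
rng = np.random.default_rng(2)
ref = rng.random((16, 16))
cur = 0.8 * np.roll(ref, 1, axis=1) + 0.1   # dimmed, offset, shifted copy
aligned = geometric_align(photometric_align(cur, ref), ref)
print(np.allclose(aligned, ref, atol=1e-6))  # True: registration recovered
```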

Claims (2)

1. A high-accuracy image registration method under scene and illumination changes, with steps as follows:
(1) collect scene observation samples at different times; N pictures are captured for each scene: 1 under environment lighting and N−1 under fixed directional side lighting; these pictures are stored as column vectors;
let X be the matrix formed by the above N illuminations:

X = [x_{EL}, x_{DSL}^{1}, \ldots, x_{DSL}^{K}]

where 1 ≤ k ≤ K and K is the number of side-lighting directions;
relocate the current camera to an angle and position close to the previous observation; after roughly adjusting by comparison against the multiple DSL images of the previous observation, obtain the current observation matrix Y:

Y = [y_{EL}, y_{DSL}^{1}, \ldots, y_{DSL}^{K}]
(2) initialization of L and F:
L denotes the illumination difference, and F denotes the geometric correction flow of the camera;
given X and Y from (1), assume images x_i and y_i, where i = 0 denotes the EL image and 1 ≤ i ≤ K denote the DSL images; a global linear photometric calibration matrix \hat{A}_i and an offset vector \hat{b}_i are obtained by:

[\hat{A}_i, \hat{b}_i] = \arg\min_{A_i, b_i} \| A_i \tilde{x}_i + b_i - \tilde{y}_i \|_F^2   (*)

where \tilde{x}_i and \tilde{y}_i are the RGB colors of the matched SIFT features of x_i and y_i, respectively; (*) defines the canonical illumination model and can be solved in closed form;
(3) perceptually-normal illumination correction:
based on the Lambertian reflectance model:

I_p = \int \langle n_p, w \rangle \, \rho_p \, L(w) \, dw

where I_p, n_p and \rho_p denote the color, surface normal and albedo of pixel p, respectively, and L(w) is the lighting function;
by adding a virtual light L_v(\cdot) to X_F, its illumination deviation from the current observation Y is corrected, yielding:

X_{LF}^{p} = \int \langle n_p, w \rangle \, \rho_p \, (L_x(w) + L_v(w)) \, dw,

X_{LF}^{p} = x_F^{p} + L_p^{v} = y_p,

where X_{LF}^{p} is the color of pixel p after correction and L_p^{v} is the color increment produced by adding the virtual light;
the photometric difference between X and Y is balanced with the following function, which is minimized:

L_i = \arg\min_{L^v} \sum_p \big( x_F^{p,i} + L_p^{v} - y_p^{i} \big)^2 \exp(-C_p/\sigma) + \alpha \sum_{p \sim q} w_{pq} \big( L_p^{v} - L_q^{v} \big)^2

where L_i is the spatially varying, perceptually-normal illumination difference between X and Y under the i-th illumination; the first term encourages photometric consistency between x_F^i + L^v and y_i; its weight \exp(-C_p/\sigma) is variable, with 0 ≤ C_p ≤ 1 the probability that pixel p has changed; the second term encourages smoothness of the virtual light field, with w_{pq} the similarity between p and q and p \sim q denoting that p and q are adjacent;
(4) camera geometric position correction:
from the perceptually-normal illumination differences L_i, a new photometrically corrected observation X_L is obtained; by extending the SIFT-flow framework, its energy function is modified as follows:

E(F) = \sum_{i,p} \| x_L^{i}(p + F_p) - y^{i}(p) \|_1 \exp(-C_p/\sigma) + \beta \sum_p \| F_p \|_2^2 + \sum_{p \sim q} \min( \gamma \| F_p - F_q \|_1, d )

where \exp(-C_p/\sigma) is the same as in (3);
the above steps yield a picture whose position and illumination are identical to those of the previously captured picture, on which detection is carried out to find fine texture changes.
2. The high-accuracy image registration method under scene and illumination changes according to claim 1, characterized in that the camera relocation step comprises:
(a) initialize the current camera to a reasonable position, such that this position covers a region large enough to contain the real target, with I_c denoting the pose and position of the camera for the current photo;
(b) keep a blue rectangle R_b at the center of I_c; a red navigation rectangle R_r represents the relative geometric difference between the current camera pose and the target area; dynamically adjust the camera pose and position until the following equation holds:

R_r = H R_b

where H is the homography computed from I_c and X_{EL}.
CN201611080132.5A 2016-11-30 2016-11-30 High-accuracy image registration method under scene and illumination changes Active CN106780297B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611080132.5A CN106780297B (en) 2016-11-30 2016-11-30 High-accuracy image registration method under scene and illumination changes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611080132.5A CN106780297B (en) 2016-11-30 2016-11-30 High-accuracy image registration method under scene and illumination changes

Publications (2)

Publication Number Publication Date
CN106780297A true CN106780297A (en) 2017-05-31
CN106780297B CN106780297B (en) 2019-10-25

Family

ID=58901275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611080132.5A Active CN106780297B (en) 2016-11-30 2016-11-30 High-accuracy image registration method under scene and illumination changes

Country Status (1)

Country Link
CN (1) CN106780297B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108269276A (en) * 2017-12-22 2018-07-10 天津大学 One kind carries out scene based on image registration and is slightly variable detection method
CN109579731A (en) * 2018-11-28 2019-04-05 华中科技大学 A method of executing 3 d surface topography measurement based on image co-registration
CN110442153A (en) * 2019-07-10 2019-11-12 佛山科学技术学院 A kind of passive optical is dynamic to catch system video cameras Corrective control method and system
CN110622213A (en) * 2018-02-09 2019-12-27 百度时代网络技术(北京)有限公司 System and method for depth localization and segmentation using 3D semantic maps
CN110780743A (en) * 2019-11-05 2020-02-11 聚好看科技股份有限公司 VR (virtual reality) interaction method and VR equipment
CN110827193A (en) * 2019-10-21 2020-02-21 国家广播电视总局广播电视规划院 Panoramic video saliency detection method based on multi-channel features
CN111882616A (en) * 2020-09-28 2020-11-03 李斯特技术中心(上海)有限公司 Method, device and system for correcting target detection result, electronic equipment and storage medium
CN112070831A (en) * 2020-08-06 2020-12-11 天津大学 Active camera repositioning method based on multi-plane joint pose estimation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5483379A (en) * 1991-05-14 1996-01-09 Svanberg; Sune Image registering in color at low light intensity
CN103106688A (en) * 2013-02-20 2013-05-15 北京工业大学 Indoor three-dimensional scene rebuilding method based on double-layer rectification method
CN104933392A (en) * 2014-03-19 2015-09-23 通用汽车环球科技运作有限责任公司 Probabilistic people tracking using multi-view integration
CN105934902A (en) * 2013-11-27 2016-09-07 奇跃公司 Virtual and augmented reality systems and methods

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5483379A (en) * 1991-05-14 1996-01-09 Svanberg; Sune Image registering in color at low light intensity
CN103106688A (en) * 2013-02-20 2013-05-15 北京工业大学 Indoor three-dimensional scene rebuilding method based on double-layer rectification method
CN105934902A (en) * 2013-11-27 2016-09-07 奇跃公司 Virtual and augmented reality systems and methods
CN104933392A (en) * 2014-03-19 2015-09-23 通用汽车环球科技运作有限责任公司 Probabilistic people tracking using multi-view integration

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Yunfei et al.: "Application of a two-view coordinate mapping model based on SIFT matching", Information Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108269276A (en) * 2017-12-22 2018-07-10 天津大学 One kind carries out scene based on image registration and is slightly variable detection method
CN110622213A (en) * 2018-02-09 2019-12-27 百度时代网络技术(北京)有限公司 System and method for depth localization and segmentation using 3D semantic maps
CN110622213B (en) * 2018-02-09 2022-11-15 百度时代网络技术(北京)有限公司 System and method for depth localization and segmentation using 3D semantic maps
CN109579731A (en) * 2018-11-28 2019-04-05 华中科技大学 A method of executing 3 d surface topography measurement based on image co-registration
CN110442153A (en) * 2019-07-10 2019-11-12 佛山科学技术学院 A kind of passive optical is dynamic to catch system video cameras Corrective control method and system
CN110442153B (en) * 2019-07-10 2022-03-25 佛山科学技术学院 Camera correction control method and system for passive optical dynamic capturing system
CN110827193A (en) * 2019-10-21 2020-02-21 国家广播电视总局广播电视规划院 Panoramic video saliency detection method based on multi-channel features
CN110827193B (en) * 2019-10-21 2023-05-09 国家广播电视总局广播电视规划院 Panoramic video significance detection method based on multichannel characteristics
CN110780743A (en) * 2019-11-05 2020-02-11 聚好看科技股份有限公司 VR (virtual reality) interaction method and VR equipment
CN112070831A (en) * 2020-08-06 2020-12-11 天津大学 Active camera repositioning method based on multi-plane joint pose estimation
CN111882616A (en) * 2020-09-28 2020-11-03 李斯特技术中心(上海)有限公司 Method, device and system for correcting target detection result, electronic equipment and storage medium
CN111882616B (en) * 2020-09-28 2021-06-18 李斯特技术中心(上海)有限公司 Method, device and system for correcting target detection result, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN106780297B (en) 2019-10-25

Similar Documents

Publication Publication Date Title
CN106780297A (en) High-accuracy image registration method under scene and illumination changes
US9811946B1 (en) High resolution (HR) panorama generation without ghosting artifacts using multiple HR images mapped to a low resolution 360-degree image
Ackermann et al. Photometric stereo for outdoor webcams
CN107909640B (en) Face relighting method and device based on deep learning
CN104392435B (en) Fisheye camera scaling method and caliberating device
Sinha et al. Pan–tilt–zoom camera calibration and high-resolution mosaic generation
CN109919911B (en) Mobile three-dimensional reconstruction method based on multi-view photometric stereo
CN109559355B (en) Multi-camera global calibration device and method without public view field based on camera set
CN105261022B (en) A kind of pcb board card matching process and device based on outer profile
CN106447601B (en) Unmanned aerial vehicle remote sensing image splicing method based on projection-similarity transformation
CN104252626B (en) Semi-supervision method for training multi-pattern recognition and registration tool model
CN106327532A (en) Three-dimensional registering method for single image
CN110728671B (en) Dense reconstruction method of texture-free scene based on vision
US20090073259A1 (en) Imaging system and method
EP3549094A1 (en) Method and system for creating images
CN103093458B (en) The detection method of key frame and device
CN107154014A (en) A kind of real-time color and depth Panorama Mosaic method
CN108572181A (en) A kind of mobile phone bend glass defect inspection method based on streak reflex
CN112348775B (en) Vehicle-mounted looking-around-based pavement pit detection system and method
CN108305277A (en) A kind of heterologous image matching method based on straightway
CN107038714A (en) Many types of visual sensing synergistic target tracking method
KR101803340B1 (en) Visual odometry system and method
CN110517211A (en) A kind of image interfusion method based on gradient domain mapping
CN109785429A (en) A kind of method and apparatus of three-dimensional reconstruction
CN112465977A (en) Method for repairing three-dimensional model water surface loophole based on dense point cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant