CN106780297B - High-precision image registration method under scene and illumination variation - Google Patents

High-precision image registration method under scene and illumination variation Download PDF

Info

Publication number
CN106780297B
CN106780297B (application CN201611080132.5A)
Authority
CN
China
Prior art keywords
illumination
camera
image
saliency
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611080132.5A
Other languages
Chinese (zh)
Other versions
CN106780297A (en)
Inventor
冯伟
孙济洲
田飞鹏
张乾
周策
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201611080132.5A priority Critical patent/CN106780297B/en
Publication of CN106780297A publication Critical patent/CN106780297A/en
Application granted granted Critical
Publication of CN106780297B publication Critical patent/CN106780297B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/14Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a high-precision image registration method under scene and illumination variation, whose steps include: camera relocation, initialization of L and F, illumination correction, and correction of the camera's geometric position. The invention proposes a multi-scale low-rank saliency detection method and, by using a GMM-based co-saliency prior, generalizes multi-scale low-rank saliency detection to co-saliency detection across multiple images, so as to detect identical or similar regions appearing in several images. Compared with traditional saliency detection methods, the proposed low-rank multi-scale superpixel fusion algorithm avoids the difficulty of scale selection and achieves more reliable saliency detection results.

Description

High-precision image registration method under scene and illumination variation
Technical field
The invention belongs to the field of image registration and relates to a high-precision image registration method for scenes in which both the scene and the illumination have changed. The method applies the rapid camera-relocation technique to the problem of registering, with high precision, images whose scene and illumination have both changed; the goal is to register the current image, with high precision, against the 1 principal-direction image and the 6 auxiliary-direction images from the previous observation, which appear during the camera relocation process.
Background technique
The background techniques involved in the present invention are:
(1) Change detection (Change Detection): change detection is an important preprocessing step for high-level vision applications. For the detection and decision stages of change detection, CDNet provides a large-scale reference video dataset and maintains a ranking of change detection algorithms. Among recently proposed algorithms such as SOBS, SC_SOBS and SuBSENSE, background modeling is one of the most successful strategies on CDNet. Other developments worth mentioning include 3D-voxel-based change detection and city-scale structural change detection using multiple panoramas and range data. To handle the influence of illumination variation and camera motion, current classical methods focus on spatially and photometrically significant changes and must treat subtle, implicit variations as noise. However, some changes occur at small scales where the spatial and photometric variation is not significant, and this assumption largely limits the ability of such methods to detect subtle texture changes. Our invention shows that low-rank analysis (low-rank analysis) can be used to decompose the sparse changes among several pictures.
(2) Color constancy (Color Constancy): in change detection, fast color constancy is widely used to correct illumination variation. Because of real-time requirements, most change detection methods can only use simple static color-constancy treatments, which limits their ability to tolerate frequent and severe illumination changes. Beyond this, intrinsic images can be used to correct illumination differences of the same scene. Recently, intrinsic image decomposition has been further extended to multiple components, including shape, illumination and reflectance decomposed from one or several pictures. But these recent developments either require complex optimization objectives or require capturing many pictures under dense lighting conditions, which prevents them from being applied directly to low-cost, end-to-end, fine-grained change detection.
(3) Geometric correction (Geometry Correction): geometric correction is another indispensable component of robust change detection. Conventional methods for rigid scenes include similarity, affine or projective transformations. For non-rigid dynamic scenes, optical flow can be used to correct the deviation of misaligned camera poses. Our invention performs geometric correction with an extended SIFT flow under a multi-illumination constraint.
Summary of the invention
Aiming at the deficiency of the prior art in accurately detecting fine-grained image changes in high-value scenes, the present invention applies camera relocation to fine change detection. By acquiring images under scene and illumination variation, it provides an end-to-end, coarse-to-fine new technique that guarantees the robustness and accuracy of change detection, and helps to further improve the ability to detect subtle, fine texture changes.
To this end, the present invention adopts the following technical scheme, which mainly comprises the following steps:
1. Camera relocation:
Collect scene observation samples at different times. Acquire 7 pictures of one scene: 1 under ambient lighting (EL) and 6 under fixed directional side lighting (DSL). These pictures are stored as column vectors.
Let X be the matrix formed by the above 7 illumination images, X = [x_EL, x_DSL^1, ..., x_DSL^K]:
where 1 ≤ k ≤ K and K is the number of side illumination directions (here K = 6).
The current camera is repositioned to an angle and position similar to that of the previous observation; after comparison with the previously observed DSL images and rough adjustment, the current observation matrix Y = [y_EL, y_DSL^1, ..., y_DSL^K] is obtained:
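The construction of X and Y amounts to flattening each shot into a column vector and stacking the columns; a minimal sketch (the function name and error handling are illustrative, not the patent's):

```python
import numpy as np

def build_observation_matrix(images):
    """Stack K+1 same-size images (one ambient-light shot plus K
    directional side-light shots) as columns of one observation matrix,
    mirroring the X / Y construction described above."""
    cols = [np.asarray(im, dtype=np.float64).ravel() for im in images]
    if any(c.size != cols[0].size for c in cols):
        raise ValueError("all observations must share the same resolution")
    return np.stack(cols, axis=1)   # shape: (num_pixels, K + 1)
```

With 7 shots of an h-by-w scene this yields an (h·w) × 7 matrix, one column per illumination condition.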
Camera relocation steps:
(a) Initialize the current camera to a reasonable position, so that the view from this position fully contains the real target region. I_c denotes the image corresponding to the current camera pose and position;
(b) Keep a blue rectangle R_b at the center of I_c; a red navigation rectangle R_r represents the relative geometric difference between the current camera pose and the target region. Dynamically adjust the camera pose and position until the following equation holds:
Rr=HRb
where H is the homography matrix computed from I_c and X_EL.
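The relocation rule Rr=HRb can be illustrated by mapping the guide rectangle's corners through a 3×3 projective transform (assuming, as is standard, that H is a homography estimated between I_c and X_EL, e.g. from matched SIFT features; the helper below is a sketch, not the patent's implementation):

```python
import numpy as np

def project_rectangle(H, corners):
    """Map rectangle corners through a 3x3 homography H: R_r = H * R_b.
    Corners are (x, y) pairs; projection is done in homogeneous coords."""
    pts = np.hstack([np.asarray(corners, float), np.ones((len(corners), 1))])
    mapped = (H @ pts.T).T
    return mapped[:, :2] / mapped[:, 2:3]   # back to inhomogeneous coords

# When the camera has returned to the reference pose, H is close to the
# identity and the navigation rectangle coincides with the centered one.
```

The operator therefore adjusts the camera until the red rectangle R_r overlaps the blue rectangle R_b.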
2. Initialization of L (illumination difference) and F (camera geometric correction flow):
L denotes the illumination difference and F denotes the geometric correction flow of the camera.
From the X and Y given in (1), consider images xi and yi, where i = 0 denotes the EL image and 1 ≤ i ≤ K denotes the DSL images. A global linear photometric calibration matrix and an offset vector are obtained from the formula:
where the two omitted symbols denote the RGB colors of the matched SIFT features of xi and yi, respectively. (*) is a standard illumination model and can be solved in closed form.
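Since the (*) model is linear in its unknowns, the closed-form solution reduces to ordinary least squares over the matched feature colors. A sketch under that assumption — the source drops the symbols for the calibration matrix and offset, so the names A and t below are illustrative:

```python
import numpy as np

def fit_photometric_calibration(src_colors, dst_colors):
    """Closed-form fit of a global linear photometric model A @ c + t ~ c'
    from RGB colors of matched features (a least-squares sketch)."""
    src = np.asarray(src_colors, float)            # (n, 3) colors from X
    dst = np.asarray(dst_colors, float)            # (n, 3) colors from Y
    M = np.hstack([src, np.ones((len(src), 1))])   # append 1 for the offset
    sol, *_ = np.linalg.lstsq(M, dst, rcond=None)  # (4, 3): rows [A^T; t^T]
    return sol[:3].T, sol[3]                       # A (3x3), t (3,)
```

Given enough matched SIFT features, this recovers the global color transform between the two observations in one solve.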
3. Normal-aware illumination correction:
Based on the Lambertian reflectance model (Lambertian Reflectance Model):
Ip=∫<np, w>ρpL(w)dw
where Ip, np and ρp denote the color, surface normal and albedo of pixel p respectively, and L(w) is the lighting function.
A virtual light Lv(·) is added to the given XF to correct its illumination deviation from the current observation Y, giving the following formula:
where the two omitted symbols denote the corrected color of pixel p and the color increment produced by adding the virtual light.
The photometric difference between X and Y is then balanced with the following function, i.e. the following function is minimized:
where Li is the spatially varying normal-aware illumination difference between X and Y under the i-th illumination. The first half of the formula encourages photometric consistency between the corrected xi and yi. Cp, with 0 ≤ Cp ≤ 1, is the variable probability that pixel p has changed. The second half of the formula encourages smoothness of the virtual light, where wpq represents the similarity of pixels p and q, and p~q indicates that p and q are adjacent.
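The integral in the Lambertian model above can be approximated by a sum over sampled incoming light directions; a toy discrete evaluation (the direction sampling and radiance values are illustrative, not the patent's):

```python
import numpy as np

def lambertian_intensity(normal, albedo, light_dirs, light_vals):
    """Discrete version of Ip = ∫ <np, w> ρp L(w) dw: sum clamped cosine
    terms <n, w> over sampled light directions w, weighted by L(w)."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    # back-facing light contributes nothing, hence the clamp at zero
    cos = np.clip(np.asarray(light_dirs, float) @ n, 0.0, None)
    return albedo * float(cos @ np.asarray(light_vals, float))
```

A light aligned with the surface normal contributes fully; a grazing or back-facing light contributes nothing, which is what the correction exploits when it reasons about normal-aware illumination differences.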
4. Camera geometric position correction:
According to the normal-aware illumination difference Li, a new photometrically corrected XL is obtained. By extending the SIFT-flow framework, its energy function is modified as follows:
where the omitted symbol is the same as in (3).
So far, through the above steps a picture whose shooting position and illumination are almost identical to those of the previous observation is obtained, and detection can be carried out to find its fine texture changes.
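The output of the extended SIFT-flow step is a per-pixel displacement field; the final warp that this field drives can be sketched with a nearest-neighbour resampler (SIFT flow itself is not reimplemented here — this only shows how a given flow is applied):

```python
import numpy as np

def warp_with_flow(img, flow):
    """Resample img with a per-pixel displacement field flow[..., (dy, dx)],
    nearest-neighbour for brevity; out-of-range samples are clamped."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    sy = np.clip(np.rint(yy + flow[..., 0]).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xx + flow[..., 1]).astype(int), 0, w - 1)
    return img[sy, sx]
```

After the warp, the current picture is both photometrically and geometrically aligned with the reference, so a simple per-pixel difference exposes fine texture changes.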
The technique proposed by the invention, which performs saliency and co-saliency detection through low-rank multi-scale superpixel fusion, mainly comprises the following steps:
1) saliency detection at a single scale;
2) fusion of multi-scale saliency;
3) refinement of the saliency map.
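Steps 1) and 2) can be illustrated with a toy single-image analogue: compute a center-surround contrast map at several scales and average the per-scale maps. This stands in for, and is much simpler than, the patent's low-rank superpixel fusion; all choices below (block means, the scale set) are illustrative.

```python
import numpy as np

def multiscale_saliency(img, scales=(2, 4, 8)):
    """Toy multi-scale saliency: at each scale, measure how much a pixel
    deviates from its coarse block-mean surround, then fuse by averaging."""
    img = np.asarray(img, float)
    h, w = img.shape
    maps = []
    for s in scales:
        H, W = h // s, w // s
        # block means give the coarse "surround" at this scale
        coarse = img[:H * s, :W * s].reshape(H, s, W, s).mean(axis=(1, 3))
        surround = np.repeat(np.repeat(coarse, s, axis=0), s, axis=1)
        m = np.zeros_like(img)
        m[:H * s, :W * s] = np.abs(img[:H * s, :W * s] - surround)
        maps.append(m)
    fused = np.mean(maps, axis=0)        # step 2): fuse across scales
    return fused / (fused.max() + 1e-12) # normalise to [0, 1]
```

Because the fusion averages over scales, no single scale has to be chosen in advance — the same motivation as the patent's fusion step.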
The present invention achieves more reliable saliency detection results and helps to further improve the processing capability of the prior art for saliency detection.
Advantages and positive effects of the present invention:
The invention proposes a multi-scale low-rank saliency detection method and, by using a GMM-based co-saliency prior, generalizes multi-scale low-rank saliency detection to co-saliency detection across multiple images, so as to detect identical or similar regions appearing in several images. Compared with traditional saliency detection methods, the proposed low-rank multi-scale superpixel fusion algorithm avoids the difficulty of scale selection and achieves more reliable saliency detection results.
Description of the drawings
Fig. 1: flow chart of high-precision image registration
Fig. 2: detection of slight changes in a Summer Palace Buddha figure
Fig. 3: camera relocation diagram
Specific embodiment
The invention will be further described below with reference to the accompanying drawings and specific embodiments. The following embodiments are descriptive, not restrictive, and do not limit the protection scope of the present invention.
A high-precision image registration method under scene and illumination variation, whose steps are as follows:
1. Camera relocation:
Collect scene observation samples at different times. Acquire 7 pictures of one scene: 1 under ambient lighting (EL) and 6 under fixed directional side lighting (DSL). These pictures are stored as column vectors.
Let X be the matrix formed by the above 7 illumination images, X = [x_EL, x_DSL^1, ..., x_DSL^K]:
where 1 ≤ k ≤ K and K is the number of side illumination directions (here K = 6).
The current camera is repositioned to an angle and position similar to that of the previous observation; after comparison with the previously observed DSL images and rough adjustment, the current observation matrix Y = [y_EL, y_DSL^1, ..., y_DSL^K] is obtained:
Camera relocation steps:
(a) Initialize the current camera to a reasonable position, so that the view from this position fully contains the real target region. I_c denotes the image corresponding to the current camera pose and position;
(b) Keep a blue rectangle R_b at the center of I_c; a red navigation rectangle R_r represents the relative geometric difference between the current camera pose and the target region. Dynamically adjust the camera pose and position until the following equation holds:
Rr=HRb
where H is the homography matrix computed from I_c and X_EL.
2. Initialization of L (illumination difference) and F (camera geometric correction flow):
L denotes the illumination difference and F denotes the geometric correction flow of the camera.
From the X and Y given in (1), consider images xi and yi, where i = 0 denotes the EL image and 1 ≤ i ≤ K denotes the DSL images. A global linear photometric calibration matrix and an offset vector are obtained from the formula:
where the two omitted symbols denote the RGB colors of the matched SIFT features of xi and yi, respectively. (*) is a standard illumination model and can be solved in closed form.
3. Normal-aware illumination correction:
Based on the Lambertian reflectance model (Lambertian Reflectance Model):
Ip=∫<np, w>ρpL(w)dw
where Ip, np and ρp denote the color, surface normal and albedo of pixel p respectively, and L(w) is the lighting function.
A virtual light Lv(·) is added to the given XF to correct its illumination deviation from the current observation Y, giving the following formula:
where the two omitted symbols denote the corrected color of pixel p and the color increment produced by adding the virtual light.
The photometric difference between X and Y is then balanced with the following function, i.e. the following function is minimized:
where Li is the spatially varying normal-aware illumination difference between X and Y under the i-th illumination. The first half of the formula encourages photometric consistency between the corrected xi and yi. Cp, with 0 ≤ Cp ≤ 1, is the variable probability that pixel p has changed. The second half of the formula encourages smoothness of the virtual light, where wpq represents the similarity of pixels p and q, and p~q indicates that p and q are adjacent.
4. Camera geometric position correction:
According to the normal-aware illumination difference Li, a new photometrically corrected XL is obtained. By extending the SIFT-flow framework, its energy function is modified as follows:
where the omitted symbol is the same as in (3).
The low-rank multi-scale fusion image saliency detection method applies low-rank multi-scale fusion to the two registered images, so that saliency detection can be performed on their different regions.
Its operating procedure is as follows:
Step S1: prepare two images, a reference image and a current image, and load them into the method.
Step S2: initialize the illumination and optical flow for the current image; by processing the current image and the reference image, the image in the current state is converted to the illumination of the image in the previous state, preliminarily eliminating the influence of the illumination difference between the two observations.
Step S3: perform illumination correction on the current-state image: compute the transformation matrix from the previous-state image to the current-state image, and add the change matrix to the current image, finally achieving the effect of normal-aware illumination correction.
Step S4: perform geometric correction on the current image: compute the offset between the two images and apply it to the current image, finally realizing the camera geometric correction.
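The S1-S4 pipeline above can be sketched end to end with deliberately simplified stand-ins: a global gain/offset fit for the illumination steps and an exhaustive integer-shift search for the geometric step. The patent's actual steps use normal-aware illumination and SIFT flow; everything below is a toy assumption-laden analogue.

```python
import numpy as np

def register_pipeline(reference, current):
    """Toy S1-S4 sketch: fit a global gain/offset mapping current's
    illumination onto reference's (S2/S3), undo a small integer shift
    found by brute-force search (S4), return the residual change map."""
    ref = np.asarray(reference, float)
    cur = np.asarray(current, float)
    # S2/S3: least-squares gain a and offset b with a*cur + b ~ ref
    a, b = np.polyfit(cur.ravel(), ref.ravel(), 1)
    cur_lit = a * cur + b
    # S4: search small integer shifts for the best photometric alignment
    best = (np.inf, 0, 0)
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            shifted = np.roll(cur_lit, (dy, dx), axis=(0, 1))
            err = np.abs(shifted - ref).mean()
            if err < best[0]:
                best = (err, dy, dx)
    aligned = np.roll(cur_lit, best[1:], axis=(0, 1))
    return np.abs(aligned - ref), best[1:]
```

After both corrections, whatever survives in the change map is candidate fine texture change rather than illumination or pose difference.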

Claims (2)

1. A high-precision image registration method under scene and illumination variation, with steps as follows:
(1) Collect scene observation samples at different times and acquire N pictures of one scene: 1 under ambient lighting and N-1 under fixed directional side lighting; these pictures are stored as column vectors.
Let X be the matrix formed by the above N illumination images:
where 1 ≤ k ≤ K and K is the number of side illumination directions.
Reposition the current camera to an angle and position similar to that of the previous observation; after comparison with the previously observed DSL images and rough adjustment, obtain the current observation matrix Y:
(2) Initialization of L and F:
L denotes the illumination difference and F denotes the geometric correction flow of the camera.
From the X and Y given in (1), consider images xi and yi, where i = 0 denotes the EL image and 1 ≤ i ≤ K denotes the DSL images; a global linear photometric calibration matrix and an offset vector are obtained from the formula:
where the two omitted symbols denote the RGB colors of the matched SIFT features of xi and yi respectively; (*) is a standard illumination model that can be solved in closed form;
(3) Normal-aware illumination correction:
Based on the Lambertian reflectance model:
Ip=∫<np,w>ρpL(w)dw
where Ip, np and ρp denote the color, surface normal and albedo of pixel p respectively, L(w) is the lighting function, and w denotes a direction in spherical coordinates.
Add a virtual light Lv(·) to the given XF to correct its illumination deviation from the current observation Y, obtaining the following formula:
where the omitted symbols denote the corrected color of pixel p and the color increment produced by adding the virtual light; Lx(w) denotes the illumination intensity seen by image X in direction w, and the last omitted symbol denotes the color of the p-th pixel of X after camera-pose correction.
The photometric difference between X and Y is balanced with the following function, i.e. the following function is minimized:
where Li is the spatially varying normal-aware illumination difference between X and Y under the i-th illumination; the first half of the formula encourages photometric consistency between the corrected xi and yi; Cp, with 0 ≤ Cp ≤ 1, is the variable probability that pixel p has changed; the second half of the formula encourages smoothness of the virtual light, where wpq represents the similarity of p and q, p~q indicates that p and q are adjacent, and q denotes a pixel immediately to the right of or immediately below p;
(4) Camera geometric position correction:
According to the normal-aware illumination difference Li, a new photometrically corrected XL is obtained; by extending the SIFT-flow framework, its energy function is modified as follows:
where the omitted symbol is the same as in (3).
Through the above steps a picture whose shooting position and illumination are identical to those of the previous observation is obtained, and detection is carried out to find its fine texture changes.
2. The high-precision image registration method under scene and illumination variation according to claim 1, characterized in that the camera relocation steps are:
(a) initialize the current camera to a reasonable position, so that the view from this position fully contains the real target region, where I_c denotes the image corresponding to the current camera pose and position;
(b) keep a blue rectangle R_b at the center of I_c, with a red navigation rectangle R_r representing the relative geometric difference between the current camera pose and the target region, and dynamically adjust the camera pose and position until the following equation holds:
Rr=HRb
where H is the homography matrix computed from I_c and X_EL.
CN201611080132.5A 2016-11-30 2016-11-30 High-precision image registration method under scene and illumination variation Active CN106780297B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611080132.5A CN106780297B (en) 2016-11-30 2016-11-30 High-precision image registration method under scene and illumination variation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611080132.5A CN106780297B (en) 2016-11-30 2016-11-30 High-precision image registration method under scene and illumination variation

Publications (2)

Publication Number Publication Date
CN106780297A CN106780297A (en) 2017-05-31
CN106780297B true CN106780297B (en) 2019-10-25

Family

ID=58901275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611080132.5A Active CN106780297B (en) 2016-11-30 2016-11-30 High-precision image registration method under scene and illumination variation

Country Status (1)

Country Link
CN (1) CN106780297B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108269276A (en) * 2017-12-22 2018-07-10 天津大学 One kind carries out scene based on image registration and is slightly variable detection method
CN110622213B (en) * 2018-02-09 2022-11-15 百度时代网络技术(北京)有限公司 System and method for depth localization and segmentation using 3D semantic maps
CN109579731B (en) * 2018-11-28 2019-12-24 华中科技大学 Method for performing three-dimensional surface topography measurement based on image fusion
CN110442153B (en) * 2019-07-10 2022-03-25 佛山科学技术学院 Camera correction control method and system for passive optical dynamic capturing system
CN110827193B (en) * 2019-10-21 2023-05-09 国家广播电视总局广播电视规划院 Panoramic video significance detection method based on multichannel characteristics
CN110780743A (en) * 2019-11-05 2020-02-11 聚好看科技股份有限公司 VR (virtual reality) interaction method and VR equipment
CN112070831B (en) * 2020-08-06 2022-09-06 天津大学 Active camera repositioning method based on multi-plane joint pose estimation
CN111882616B (en) * 2020-09-28 2021-06-18 李斯特技术中心(上海)有限公司 Method, device and system for correcting target detection result, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5483379A (en) * 1991-05-14 1996-01-09 Svanberg; Sune Image registering in color at low light intensity
CN103106688A (en) * 2013-02-20 2013-05-15 北京工业大学 Indoor three-dimensional scene rebuilding method based on double-layer rectification method
CN104933392A (en) * 2014-03-19 2015-09-23 通用汽车环球科技运作有限责任公司 Probabilistic people tracking using multi-view integration
CN105934902A (en) * 2013-11-27 2016-09-07 奇跃公司 Virtual and augmented reality systems and methods

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5483379A (en) * 1991-05-14 1996-01-09 Svanberg; Sune Image registering in color at low light intensity
CN103106688A (en) * 2013-02-20 2013-05-15 北京工业大学 Indoor three-dimensional scene rebuilding method based on double-layer rectification method
CN105934902A (en) * 2013-11-27 2016-09-07 奇跃公司 Virtual and augmented reality systems and methods
CN104933392A (en) * 2014-03-19 2015-09-23 通用汽车环球科技运作有限责任公司 Probabilistic people tracking using multi-view integration

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application of a two-viewpoint coordinate mapping model based on SIFT matching; 王云飞 et al.; Information Technology; 2014-12-31; pp. 87-91 *

Also Published As

Publication number Publication date
CN106780297A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN106780297B (en) High-precision image registration method under scene and illumination variation
Ackermann et al. Photometric stereo for outdoor webcams
US20230154105A1 (en) System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function
CN105279372B (en) A method and apparatus for determining building depth
CN107635129B (en) Three-dimensional trinocular camera device and depth fusion method
CN109919911B (en) Mobile three-dimensional reconstruction method based on multi-view photometric stereo
CN107154014B (en) Real-time color and depth panoramic image splicing method
WO2010032792A1 (en) Three-dimensional measurement apparatus and method thereof
CN109559355B (en) Multi-camera global calibration device and method without public view field based on camera set
CN103886107B (en) Robot localization and map structuring system based on ceiling image information
KR20210066031A (en) Improved camera calibration system, target, and process
EP2104365A1 (en) Method and apparatus for rapid three-dimensional restoration
CN107103589A (en) A highlight region restoration method based on light field images
CN110728671A (en) Dense reconstruction method of texture-free scene based on vision
CN108572181A (en) A mobile phone curved glass defect detection method based on fringe reflection
CN107038714A (en) A multi-type visual sensing collaborative target tracking method
CN113393439A (en) Forging defect detection method based on deep learning
CN110487214A (en) A product qualification rate detection system and method combining photometric stereo and structured light
CN110378995B (en) Method for three-dimensional space modeling by using projection characteristics
CN107680035A (en) A parameter calibration method and device, server and readable storage medium
CN108010071B (en) System and method for measuring brightness distribution by using 3D depth measurement
CN113012238B (en) Method for quick calibration and data fusion of multi-depth camera
CN114241059B (en) Synchronous calibration method for camera and light source in photometric stereo vision system
CN110060212A (en) A deep-learning-based multispectral photometric stereo surface normal recovery method
CN113034590B (en) AUV dynamic docking positioning method based on visual fusion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant