CN106023077A - Dynamic analysis and splicing method for images - Google Patents

Dynamic analysis and splicing method for images

Info

Publication number
CN106023077A
Authority
CN
China
Prior art keywords
image
point
picture
matrix
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610329109.9A
Other languages
Chinese (zh)
Inventor
龙永生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shenzhoulong Information Service Co Ltd
Original Assignee
Shenzhen Shenzhoulong Information Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shenzhoulong Information Service Co Ltd filed Critical Shenzhen Shenzhoulong Information Service Co Ltd
Priority to CN201610329109.9A priority Critical patent/CN106023077A/en
Publication of CN106023077A publication Critical patent/CN106023077A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images

Abstract

The invention discloses a dynamic analysis and splicing method for images, comprising the steps of image preprocessing, image registration, estimation of a grayscale image transformation formula, image photometric registration, and image fusion. The method achieves robust and accurate splicing of images taken by an ordinary camera whose brightness and color are inconsistent.

Description

Dynamic analysis and splicing method for images
Technical field
The present invention relates to the technical field of image recognition, and in particular to a dynamic analysis and splicing method for images.
Background art
With the rapid development of the information age, image splicing has penetrated into many fields and is widely applied in image remote sensing, motion analysis, digital video compression, and the like. In aerial photography, in order to expand the field of view, improve resolution, and obtain more complete information with higher accuracy, two or more pictures from different remote sensors need to be spliced into a single picture. Because of the complexity, specificity, and diversity of this technology, the intervention of a single factor may cause large differences in the result, and the evaluation criteria for the result also vary from person to person. In conventional picture splicing, because different devices are configured differently and the lighting environments at capture differ, splicing two pictures that overlap but differ considerably in brightness and color still suffers from significant defects.
The drawbacks described above are worth solving.
Summary of the invention
In order to overcome the deficiencies of the prior art, the present invention provides a dynamic analysis and splicing method for images.
The technical solution of the present invention is as follows:
A dynamic analysis and splicing method for images, characterized by comprising the following steps:
S1: image preprocessing;
S2: image registration;
S3: estimation of the grayscale image transformation formula;
S4: image photometric registration;
S5: image fusion.
Further, step S1 is specifically: the captured color image is converted to a grayscale image, a mean-value method is used to estimate the luminance background of the image, and the background is subtracted from the grayscale image to eliminate the influence of exposure and brightness differences.
Further, step S2 is specifically:
S21: extract the image corner points with the Harris algorithm; specifically, first compute the real symmetric matrix M at each point of the image
M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
where x, y are points of the target image template, I_x and I_y are the first-order partial derivatives of image I in the horizontal and vertical directions, and w(x, y) is a two-dimensional Gaussian smoothing function;
then use M to compute the corner response function R for each pixel; R is evaluated at every point of the image, and the points whose R is above a certain threshold and is a local maximum are taken as corner points,
R = \det M - k (\operatorname{trace} M)^2, \quad k = 0.04 \sim 0.2
where det(M) is the determinant of matrix M, trace(M) is the trace of matrix M, and k is a constant;
S22: perform corner matching with the normalized cross-correlation method,
NCC = \frac{\sum_i \left( I_1(x_i, y_i) - u_1 \right) \left( I_2(x_i, y_i) - u_2 \right)}{\sqrt{\sum_i \left( I_1(x_i, y_i) - u_1 \right)^2 \sum_i \left( I_2(x_i, y_i) - u_2 \right)^2}}
where u is the mean value of the template pixels and NCC is the cross-correlation of the two images;
S23: refine the matched points with the RANSAC algorithm to remove mismatches.
Further, step S3 is specifically: the images captured by a camera moving around an approximately fixed point satisfy a perspective transformation relation, and the relation between the images is as follows:
X_i' = \begin{bmatrix} x_i' \\ y_i' \\ w_i' \end{bmatrix} = H \cdot X_i = \begin{bmatrix} h_0 & h_1 & h_2 \\ h_3 & h_4 & h_5 \\ h_6 & h_7 & h_8 \end{bmatrix} \cdot \begin{bmatrix} x_i \\ y_i \\ w_i \end{bmatrix}
H is solved by least squares and is then nonlinearly optimized with the Levenberg-Marquardt algorithm, where x_i is the horizontal pixel quantity of the image, y_i is the vertical pixel quantity, w_i is the pixel density, and the matrix refines the pixel values of the sampled screen points.
Further, step S4 is specifically:
the three color channels red, green, and blue are considered separately; within each color channel, the influence of different exposure parameters and different white balances on image grayscale and color is represented by the linear transformation in the following formula:
\begin{bmatrix} R_2 \\ G_2 \\ B_2 \end{bmatrix} = \begin{bmatrix} c_r & 0 & 0 \\ 0 & c_g & 0 \\ 0 & 0 & c_b \end{bmatrix} \cdot \begin{bmatrix} R_1 \\ G_1 \\ B_1 \end{bmatrix} + \begin{bmatrix} d_r \\ d_g \\ d_b \end{bmatrix}
In the above formula, (c, d) are the linear transformation parameters. During the calculation, the second image is first aligned to the first image by the projective transformation H computed above, so that the pixel points of the two images correspond geometrically; two sets of RGB values are thus obtained at each pixel: (R2, G2, B2) and (R1, G1, B1).
Further, the pixels are the matched corner points and the neighboring points around the corner points.
Further, step S5 is specifically:
S51: according to the 9-parameter matrix H computed above and the 6 parameters of the 3 color channels, namely R1, R2, G1, G2, B1, B2, the images are unified in geometric alignment and in photometry;
S52: the images are projected onto a cylinder, and a linear interpolation algorithm is used to blend them at the seam to obtain the final stitched image.
According to the above scheme, the beneficial effects of the present invention are as follows: the present invention uses the RANSAC algorithm, taking image corner positions as its objects, to remove outliers, which guarantees the geometric registration of the images; the MSAC algorithm, taking the pixel values at the corresponding points of each color channel as its objects, removes outliers and ensures unified photometric registration. Images taken with an ordinary camera whose brightness and chromaticity are inconsistent can all be spliced robustly and correctly.
Detailed description of the invention
The present invention is further described below in conjunction with an embodiment:
A dynamic analysis and splicing method for images, in which the splicing is divided into five steps:
1. Image preprocessing
First, the captured color image is converted to a grayscale image; a mean-value method is used to estimate the luminance background of the image, which is then subtracted from the grayscale image to eliminate the influence of exposure and brightness differences.
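A minimal sketch of this preprocessing step, assuming OpenCV and NumPy are available; the function name, the mean-filter kernel size, and the re-centering of the result are illustrative choices rather than values taken from the patent.

```python
import cv2
import numpy as np

def preprocess(color_img, bg_kernel=51):
    """Convert to grayscale and subtract a mean-filtered luminance background."""
    gray = cv2.cvtColor(color_img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # Estimate the slowly varying luminance background with a large mean filter.
    background = cv2.blur(gray, (bg_kernel, bg_kernel))
    # Subtract the background to suppress exposure and brightness differences,
    # then shift back into a displayable range.
    flattened = gray - background + background.mean()
    return np.clip(flattened, 0, 255).astype(np.uint8)
```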
2. Image registration
(1) The image corner points are extracted with the Harris algorithm. I_x and I_y are the first-order partial derivatives of image I in the horizontal and vertical directions, and w(x, y) is a two-dimensional Gaussian smoothing function; the real symmetric matrix M is computed at each point of the image. M is then used to compute the corner response function R for each pixel; R is evaluated at every point of the image, and the points whose R is above a certain threshold and is a local maximum are taken as corner points, where det(M) is the determinant of matrix M, trace(M) is the trace of matrix M, and k is a constant (a code sketch of this corner detector is given after this step).
M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
R = \det M - k (\operatorname{trace} M)^2, \quad k = 0.04 \sim 0.2;
(2) Corner matching is performed with the normalized cross-correlation method, where u is the mean value of the template pixels,
x, y are points of the target image template, and NCC is the cross-correlation of the two images (a matching sketch is given after this step).
NCC = \frac{\sum_i \left( I_1(x_i, y_i) - u_1 \right) \left( I_2(x_i, y_i) - u_2 \right)}{\sqrt{\sum_i \left( I_1(x_i, y_i) - u_1 \right)^2 \sum_i \left( I_2(x_i, y_i) - u_2 \right)^2}} ;
(3) The matched points are refined with the RANSAC algorithm to remove mismatches, as sketched after this step.
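The Harris step (1) can be sketched as follows, writing out the entries of M and the response R explicitly so that the threshold-plus-local-maximum rule is visible; the Sobel aperture, the Gaussian sigma, and the threshold ratio are assumed values.

```python
import cv2
import numpy as np

def harris_corners(gray, k=0.04, sigma=1.5, thresh_ratio=0.01):
    """Corner response R = det(M) - k*trace(M)^2 per pixel, then threshold + local maxima."""
    g = gray.astype(np.float32)
    Ix = cv2.Sobel(g, cv2.CV_32F, 1, 0, ksize=3)   # horizontal first-order derivative
    Iy = cv2.Sobel(g, cv2.CV_32F, 0, 1, ksize=3)   # vertical first-order derivative
    # Gaussian-weighted sums of derivative products: the entries of M at every pixel.
    Sxx = cv2.GaussianBlur(Ix * Ix, (0, 0), sigma)
    Syy = cv2.GaussianBlur(Iy * Iy, (0, 0), sigma)
    Sxy = cv2.GaussianBlur(Ix * Iy, (0, 0), sigma)
    det_M = Sxx * Syy - Sxy * Sxy
    trace_M = Sxx + Syy
    R = det_M - k * trace_M ** 2
    # Keep points above a threshold that are also local maxima in a 3x3 window.
    local_max = (R == cv2.dilate(R, np.ones((3, 3), np.uint8)))
    corners = np.argwhere((R > thresh_ratio * R.max()) & local_max)
    return corners  # array of (row, col) corner coordinates
```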
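For the matching step (2), the sketch below uses cv2.matchTemplate with the TM_CCOEFF_NORMED score, which computes the mean-subtracted normalized cross-correlation defined above; the window size, search radius, and acceptance threshold are assumptions.

```python
import cv2
import numpy as np

def match_corners(gray1, gray2, corners1, win=15, search=40, min_ncc=0.8):
    """For each corner of image 1, find the best NCC match inside a search window of image 2."""
    h, w = gray2.shape
    r = win // 2
    matches = []
    for (y, x) in corners1:                      # corners are (row, col)
        if y - r < 0 or x - r < 0 or y + r + 1 > gray1.shape[0] or x + r + 1 > gray1.shape[1]:
            continue
        template = gray1[y - r:y + r + 1, x - r:x + r + 1]
        y0, y1 = max(0, y - search), min(h, y + search)
        x0, x1 = max(0, x - search), min(w, x + search)
        region = gray2[y0:y1, x0:x1]
        if region.shape[0] <= win or region.shape[1] <= win:
            continue
        ncc = cv2.matchTemplate(region, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(ncc)
        if max_val >= min_ncc:
            # Convert the best match back to full-image (x, y) coordinates.
            mx, my = max_loc[0] + x0 + r, max_loc[1] + y0 + r
            matches.append(((x, y), (mx, my), max_val))
    return matches
```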
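Step (3) can be delegated to cv2.findHomography, which runs RANSAC and returns an inlier mask together with a homography estimate; the 3-pixel reprojection threshold is an assumed value, and H is oriented here so that it maps image-2 coordinates into image 1's frame for the later warping.

```python
import cv2
import numpy as np

def purify_matches(matches, reproj_thresh=3.0):
    """Remove mismatched corner pairs with RANSAC and keep only the inliers.

    `matches` is a list of ((x1, y1), (x2, y2), ncc) tuples as produced above.
    """
    pts1 = np.float32([m[0] for m in matches])
    pts2 = np.float32([m[1] for m in matches])
    # Estimate H mapping image-2 points onto image 1 while rejecting outliers.
    H, mask = cv2.findHomography(pts2, pts1, cv2.RANSAC, reproj_thresh)
    inliers = [m for m, keep in zip(matches, mask.ravel()) if keep]
    return H, inliers
```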
3. Estimation of the grayscale image transformation formula
The images captured by a camera moving around an approximately fixed point satisfy a perspective transformation relation, and the relation between the images is as follows:
X_i' = \begin{bmatrix} x_i' \\ y_i' \\ w_i' \end{bmatrix} = H \cdot X_i = \begin{bmatrix} h_0 & h_1 & h_2 \\ h_3 & h_4 & h_5 \\ h_6 & h_7 & h_8 \end{bmatrix} \cdot \begin{bmatrix} x_i \\ y_i \\ w_i \end{bmatrix}
X is the original matrix and H is the transformation matrix, where x_i is the horizontal pixel quantity of the image, y_i is the vertical pixel quantity, w_i is the pixel density, and the matrix refines the pixel values of the sampled screen points.
H is solved by least squares and is then nonlinearly optimized with the Levenberg-Marquardt algorithm.
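A sketch of the least-squares-plus-Levenberg-Marquardt refinement, using scipy.optimize.least_squares with method='lm' to minimize the reprojection error over the inlier pairs; fixing h_8 = 1 as the gauge is an assumption the patent does not spell out.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_homography(H0, src_pts, dst_pts):
    """Nonlinearly refine a homography by minimizing reprojection error (LM).

    src_pts are matched points in image 2, dst_pts the corresponding points in
    image 1 (so the refined H keeps mapping image 2 into image 1's frame).
    """
    def residuals(h):
        H = np.append(h, 1.0).reshape(3, 3)          # fix h8 = 1 (assumed gauge)
        ones = np.ones((len(src_pts), 1))
        proj = (H @ np.hstack([src_pts, ones]).T).T  # homogeneous projection
        proj = proj[:, :2] / proj[:, 2:3]
        return (proj - dst_pts).ravel()

    h0 = (H0 / H0[2, 2]).ravel()[:8]                 # initial guess from the RANSAC step
    result = least_squares(residuals, h0, method="lm")
    return np.append(result.x, 1.0).reshape(3, 3)
```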
4. Image photometric registration
The three color channels red, green, and blue are considered separately, so that within each color channel the influence of different exposure parameters and different white balances on image grayscale and color can be approximated by a linear transformation:
\begin{bmatrix} R_2 \\ G_2 \\ B_2 \end{bmatrix} = \begin{bmatrix} c_r & 0 & 0 \\ 0 & c_g & 0 \\ 0 & 0 & c_b \end{bmatrix} \cdot \begin{bmatrix} R_1 \\ G_1 \\ B_1 \end{bmatrix} + \begin{bmatrix} d_r \\ d_g \\ d_b \end{bmatrix}
(c, d) are the linear transformation parameters. During the calculation, the second image is first aligned to the first image by the projective transformation H computed above, so that the pixel points of the two images correspond geometrically; two sets of RGB values are thus obtained at each pixel: (R2, G2, B2) and (R1, G1, B1). The principle for selecting pixels is to prefer the corner points matched above and the neighboring points around them.
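A sketch of the per-channel photometric registration: image 2 is warped onto image 1 with H, and a gain/offset pair (c, d) is fit for each of the three channels by least squares over the overlapping pixels. Fitting over the whole overlap is a simplification; the patent prefers the matched corner points and their neighborhoods and uses MSAC to reject outlying pixels.

```python
import cv2
import numpy as np

def photometric_registration(img1, img2, H):
    """Estimate per-channel linear transforms (c, d) mapping img1's colors to img2's."""
    h, w = img1.shape[:2]
    warped2 = cv2.warpPerspective(img2, H, (w, h))           # align image 2 to image 1
    # Crude overlap mask: pixels that received any data from image 2.
    overlap = cv2.cvtColor(warped2, cv2.COLOR_BGR2GRAY) > 0
    params = []
    for ch in range(3):                                       # B, G, R channels
        v1 = img1[..., ch][overlap].astype(np.float64)
        v2 = warped2[..., ch][overlap].astype(np.float64)
        # Least-squares fit of v2 ≈ c * v1 + d.
        A = np.stack([v1, np.ones_like(v1)], axis=1)
        (c, d), *_ = np.linalg.lstsq(A, v2, rcond=None)
        params.append((c, d))
    return params  # [(c_b, d_b), (c_g, d_g), (c_r, d_r)]
```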
5. Image fusion
According to the 9-parameter matrix H computed above and the 6 parameters of the 3 color channels (R1, R2, G1, G2, B1, B2), the images are first unified in geometric alignment and in photometry; the images are then projected onto a cylinder, and a linear interpolation algorithm is used to blend them at the seam to obtain the final stitched image.
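A sketch of the fusion step: a cylindrical projection implemented with cv2.remap, followed by linear (feathered) blending of the two already-aligned images; the focal length f and the distance-transform weighting are illustrative assumptions, since the patent only specifies cylindrical projection and linear interpolation at the seam.

```python
import cv2
import numpy as np

def cylindrical_warp(img, f):
    """Project an image onto a cylinder of focal length f (in pixels)."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    theta = (xs - cx) / f
    # For each destination pixel on the cylinder, look up the source pinhole pixel.
    map_x = (f * np.tan(theta) + cx).astype(np.float32)
    map_y = ((ys - cy) / np.cos(theta) + cy).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

def linear_blend(img1, img2):
    """Feather two aligned, same-size images: weights fall off toward each image's border."""
    def weight(img):
        mask = (cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) > 0).astype(np.uint8)
        return cv2.distanceTransform(mask, cv2.DIST_L2, 5).astype(np.float32)
    w1, w2 = weight(img1), weight(img2)
    total = np.maximum(w1 + w2, 1e-6)
    blended = img1 * (w1 / total)[..., None] + img2 * (w2 / total)[..., None]
    return blended.astype(np.uint8)
```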
It should be understood that those of ordinary skill in the art can make improvements or modifications according to the above description, and all such improvements and modifications shall fall within the protection scope of the appended claims of the present invention.
The present patent has been described above by way of example. Obviously, the implementation of the present patent is not limited to the manner described above. Various improvements made using the method concept and technical solution of the present patent, or direct application of the concept and technical solution of the present patent to other occasions without improvement, all fall within the protection scope of the present invention.

Claims (5)

1. A dynamic analysis and splicing method for images, characterized by comprising the following steps:
S1: image preprocessing;
S2: image registration;
S3: estimation of the grayscale image transformation formula;
S4: image photometric registration;
S5: image fusion.
2. The dynamic analysis and splicing method for images according to claim 1, characterized in that step S1 is specifically: the captured color image is converted to a grayscale image, a mean-value method is used to estimate the luminance background of the image, and the background is subtracted from the grayscale image to eliminate the influence of exposure and brightness differences.
3. The dynamic analysis and splicing method for images according to claim 1, characterized in that step S2 is specifically:
S21: extract the image corner points with the Harris algorithm; specifically, first compute the real symmetric matrix M at each point of the image;
then use M to compute the corner response function R for each pixel; R is evaluated at every point of the image, and the points whose R is above a certain threshold and is a local maximum are taken as corner points,
R = \det M - k (\operatorname{trace} M)^2, \quad k = 0.04 \sim 0.2
where det(M) is the determinant of matrix M, trace(M) is the trace of matrix M, and k is a constant;
S22: perform corner matching with the normalized cross-correlation method;
S23: refine the matched points with the RANSAC algorithm to remove mismatches.
4. The dynamic analysis and splicing method for images according to claim 1, characterized in that step S3 is specifically: the images captured by a camera moving around an approximately fixed point satisfy a perspective transformation relation, and the relation between the images is as follows:
X_i' = \begin{bmatrix} x_i' \\ y_i' \\ w_i' \end{bmatrix} = H \cdot X_i = \begin{bmatrix} h_0 & h_1 & h_2 \\ h_3 & h_4 & h_5 \\ h_6 & h_7 & h_8 \end{bmatrix} \cdot \begin{bmatrix} x_i \\ y_i \\ w_i \end{bmatrix}
where X is the original matrix and H is the transformation matrix; H is solved by least squares and is then nonlinearly optimized with the Levenberg-Marquardt algorithm, where x_i is the horizontal pixel quantity of the image, y_i is the vertical pixel quantity, w_i is the pixel density, and the matrix refines the pixel values of the sampled screen points.
5. The dynamic analysis and splicing method for images according to claim 1, characterized in that step S4 is specifically:
the three color channels red, green, and blue are considered separately; within each color channel, the influence of different exposure parameters and different white balances on image grayscale and color is represented by the linear transformation in the following formula:
\begin{bmatrix} R_2 \\ G_2 \\ B_2 \end{bmatrix} = \begin{bmatrix} c_r & 0 & 0 \\ 0 & c_g & 0 \\ 0 & 0 & c_b \end{bmatrix} \cdot \begin{bmatrix} R_1 \\ G_1 \\ B_1 \end{bmatrix} + \begin{bmatrix} d_r \\ d_g \\ d_b \end{bmatrix}
In the above formula, (c, d) are the linear transformation parameters. During the calculation, the second image is first aligned to the first image by the projective transformation H computed above, so that the pixel points of the two images correspond geometrically; two sets of RGB values are thus obtained at each pixel: (R2, G2, B2) and (R1, G1, B1).
CN201610329109.9A 2016-05-18 2016-05-18 Dynamic analysis and splicing method for images Pending CN106023077A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610329109.9A CN106023077A (en) 2016-05-18 2016-05-18 Dynamic analysis and splicing method for images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610329109.9A CN106023077A (en) 2016-05-18 2016-05-18 Dynamic analysis and splicing method for images

Publications (1)

Publication Number Publication Date
CN106023077A true CN106023077A (en) 2016-10-12

Family

ID=57098747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610329109.9A Pending CN106023077A (en) 2016-05-18 2016-05-18 Dynamic analysis and splicing method for images

Country Status (1)

Country Link
CN (1) CN106023077A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037134A (en) * 2020-09-10 2020-12-04 中国空气动力研究与发展中心计算空气动力研究所 Image splicing method for background homogeneous processing, storage medium and terminal

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050063583A1 (en) * 2003-09-24 2005-03-24 Lim Suk Hwan Digital picture image color conversion
CN104077764A (en) * 2014-07-11 2014-10-01 金陵科技学院 Panorama synthetic method based on image mosaic

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050063583A1 (en) * 2003-09-24 2005-03-24 Lim Suk Hwan Digital picture image color conversion
CN104077764A (en) * 2014-07-11 2014-10-01 金陵科技学院 Panorama synthetic method based on image mosaic

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
赵辉 (ZHAO Hui) et al.: "基于亮度与白平衡自动调整的图像拼接算法" [Image stitching algorithm based on automatic brightness and white balance adjustment], 《中国科技论文在线》 [Sciencepaper Online] *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037134A (en) * 2020-09-10 2020-12-04 中国空气动力研究与发展中心计算空气动力研究所 Image splicing method for background homogeneous processing, storage medium and terminal
CN112037134B (en) * 2020-09-10 2023-04-21 中国空气动力研究与发展中心计算空气动力研究所 Image stitching method for background homogeneous processing, storage medium and terminal

Similar Documents

Publication Publication Date Title
US8929654B2 (en) Spectral image processing
US10013764B2 (en) Local adaptive histogram equalization
CN104680496B (en) A kind of Kinect depth map restorative procedures based on color images
CN101136192B (en) System and method for automated calibration and correction of display geometry and color
CN105931186B (en) Panoramic video splicing system and method based on automatic camera calibration and color correction
US7664315B2 (en) Integrated image processor
CN101996407B (en) Colour calibration method for multiple cameras
CN104537625A (en) Bayer color image interpolation method based on direction flag bits
CN101324749B (en) Method for performing projection display on veins plane
CN105023249A (en) Highlight image restoration method and device based on optical field
CN106485751B (en) Unmanned aerial vehicle photographic imaging and data processing method and system applied to foundation pile detection
CN103868460A (en) Parallax optimization algorithm-based binocular stereo vision automatic measurement method
CN103345755A (en) Chessboard angular point sub-pixel extraction method based on Harris operator
US10410368B1 (en) Hybrid depth processing
CN102017639B (en) Methods and apparatuses for addressing chromatic aberrations and purple fringing
CN105430376A (en) Method and device for detecting consistency of panoramic camera
CN103281513B (en) Pedestrian recognition method in the supervisory control system of a kind of zero lap territory
CN102982336A (en) Method and system for recognition model generation
CN102722868A (en) Tone mapping method for high dynamic range image
CN104392416A (en) Video stitching method for sports scene
CN104125410A (en) Panoramic video multi-lens exposure compensation method and device thereof
CN107220955A (en) A kind of brightness of image equalization methods based on overlapping region characteristic point pair
CN105359024A (en) Image pickup device and image pickup method
CN104639920A (en) Wide dynamic fusion method based on single-frame double-pulse exposure mode
US11823326B2 (en) Image processing method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20161012)