CN107346536A - Method and apparatus for image fusion - Google Patents
Method and apparatus for image fusion
- Publication number: CN107346536A (application CN201710538727.9A)
- Authority
- CN
- China
- Prior art keywords
- point
- target
- suture
- starting point
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications (all under G06T - Image data processing or generation, in general)
- G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/254 - Analysis of motion involving subtraction of images
- G06T2207/10016 - Video; image sequence
- G06T2207/20221 - Image fusion; image merging
Abstract
Embodiments of the invention disclose a method and apparatus for image fusion. The overlapping region of the video frames in two video streams is calculated; using the frame-difference method, all motion regions within the overlapping region are computed, along with the motion amount of each motion region. From the overlapping region and the motion regions, the starting point of each candidate seam can be determined. By computing the grayscale difference matrix of the overlapping region and using the starting points and motion amounts, target points satisfying a preset rule can be selected; from the starting points and target points, the seam corresponding to each starting point is determined. A seam satisfying a preset condition is then selected from these seams as the target seam, and the video frames are fused along it. Because the motion regions in the video frames are avoided when determining the target seam, images fused along this seam are free of overlap, misalignment, and ghosting in the stitched result.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a method and apparatus for image fusion.
Background technology
Image fusion, a branch of information fusion, is a current focus of information fusion research. The data fused are images carrying brightness, color, temperature, distance, and other scene features, supplied either as a single image or as a sequence. Image fusion combines the information of two or more images into one, so that the fused image contains more information and is easier for people to interpret or for computers to process. The goal of image fusion is to reduce the uncertainty and redundancy of the output while merging as much of the relevant information as possible for the application at hand. Its advantages are obvious: it extends the temporal and spatial coverage of the imagery, reduces uncertainty, increases reliability, and improves the robustness of the system.
Image fusion thus refers to the process of integrating the information of several input images to obtain a higher-quality output image. During image acquisition, differences in scene illumination, viewing angle, and so on mean that the overlapping regions of the images to be fused can differ considerably.
In the traditional approach, images are fused directly according to the projective transformation matrix obtained during registration, which often leaves an obvious stitching seam in the fused image. When a moving object appears in the overlapping region, overlap, misalignment, and ghosting appear in the stitched image.
How to fuse images accurately is therefore an urgent problem for those skilled in the art.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a method and apparatus for image fusion that achieve accurate fusion of images.
To solve the above technical problem, an embodiment of the present invention provides an image fusion method, including:
S10: calculating the overlapping region of the video frames in the two video streams;
S11: calculating, using the frame-difference method, all motion regions within the overlapping region; and calculating each motion region's motion amount from its width, height, and number of moving pixels;
S12: determining, from the overlapping region and the motion regions, the starting point of each seam;
S13: calculating the grayscale difference matrix of the overlapping region;
S14: selecting, from the starting points, the motion amounts, and the grayscale difference matrix, target points satisfying a preset rule; and determining, from the starting points and target points, the seam corresponding to each starting point;
S15: selecting from these seams one that satisfies a preset condition as the target seam; and fusing the video frames along the target seam.
Optionally, S11 includes:
calculating the motion amount ρ of the motion region according to the formula

ρ = move_pix / (rect_width × rect_height)

where move_pix is the number of moving pixels in the motion region, rect_width is the width of the motion region, and rect_height is its height.
Optionally, S12 includes:
selecting, according to the height of the overlapping region, all pixels of the overlapping region at that height; and
selecting as starting points those pixels that do not belong to any motion region.
Optionally, S14 includes:
S140: judging whether the target neighborhood point of the starting point satisfies a first preset condition; if so, performing S141; if not, performing S145;
S141: calculating the step value Δstep according to the formula Δstep = rect_width × ρ;
S142: judging whether the target neighborhood point crosses the boundary; if so, performing S143; if not, performing S144;
S143: selecting, according to the grayscale difference matrix, the neighborhood point of the starting point that satisfies a second preset condition as the target point;
S144: determining the target point according to the step value and the grayscale difference matrix;
S145: taking the target neighborhood point as the target point;
S146: judging whether the target point satisfies a third preset condition; if so, ending the operation; if not, performing S147;
S147: taking the target point as the new starting point and returning to S140.
Optionally, the method further includes:
judging whether fusion of the video frames of the two video streams is finished;
if not, returning to S11.
An embodiment of the present invention further provides an image fusion apparatus, including a computing unit, a determining unit, and a selection unit, wherein:
the computing unit is configured to calculate the overlapping region of the two video streams;
the computing unit is further configured to calculate, using the frame-difference method, all motion regions within the overlapping region, and to calculate each motion region's motion amount from its width, height, and number of moving pixels;
the determining unit is configured to determine, from the overlapping region and the motion regions, the starting point of each seam;
the computing unit is further configured to calculate the grayscale difference matrix of the overlapping region;
the selection unit is configured to select, from the starting points, the motion amounts, and the grayscale difference matrix, target points satisfying a preset rule, and to determine, from the starting points and target points, the seam corresponding to each starting point;
the selection unit is further configured to select from these seams one that satisfies a preset condition as the target seam, and to fuse the video frames along the target seam.
Optionally, the computing unit is specifically configured to calculate the motion amount ρ of the motion region according to the formula

ρ = move_pix / (rect_width × rect_height)

where move_pix is the number of moving pixels in the motion region, rect_width is the width of the motion region, and rect_height is its height.
Optionally, the determining unit is specifically configured to select, according to the height of the overlapping region, all pixels of the overlapping region at that height, and to select as starting points those pixels that do not belong to any motion region.
Optionally, the selection unit includes a first judgment subunit, a computation subunit, a picking subunit, a determination subunit, an assignment subunit, and a second judgment subunit, wherein:
the first judgment subunit is configured to judge whether the target neighborhood point of the starting point satisfies the first preset condition; if so, to trigger the computation subunit; if not, to trigger the assignment subunit;
the computation subunit is configured to calculate the step value Δstep according to the formula Δstep = rect_width × ρ;
the first judgment subunit is further configured to judge whether the target neighborhood point crosses the boundary; if so, to trigger the picking subunit; if not, to trigger the determination subunit;
the picking subunit is configured to select, according to the grayscale difference matrix, the neighborhood point of the starting point that satisfies the second preset condition as the target point;
the determination subunit is configured to determine the target point according to the step value and the grayscale difference matrix;
the assignment subunit is configured to take the target neighborhood point as the target point;
the second judgment subunit is configured to judge whether the target point satisfies the third preset condition; if so, to end the operation; if not, to take the target point as the new starting point and trigger the first judgment subunit.
Optionally, the apparatus further includes a judging unit configured to judge whether fusion of the video frames of the two video streams is finished, and if not, to trigger the computing unit.
As can be seen from the above technical solution, by calculating the overlapping region of the video frames in the two video streams and then applying the frame-difference method, all motion regions within the overlapping region can be computed. Once the motion regions are determined, the motion amount of each can be calculated from its width, height, and number of moving pixels. From the overlapping region and the motion regions, the starting point of each seam can be determined; by computing the grayscale difference matrix of the overlapping region and using the starting points and motion amounts, target points satisfying the preset rule can be selected; from the starting points and target points, the seam corresponding to each starting point is determined. From these seams, one satisfying the preset condition, that is, an optimal seam, is selected as the target seam, and the video frames can be fused along it. Because the motion regions in the video frames are avoided when determining the target seam, images fused along this seam are free of overlap, misalignment, and ghosting in the stitched result, improving the accuracy of image stitching.
Brief description of the drawings
To illustrate the embodiments of the present invention more clearly, the accompanying drawings required by the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an image fusion method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of a method for selecting target points provided by an embodiment of the present invention;
Fig. 3a is a schematic diagram of a frame captured by the left camera in an embodiment of the present invention;
Fig. 3b is a schematic diagram of a frame captured by the right camera in an embodiment of the present invention;
Fig. 3c is a schematic diagram of the image obtained by fusing Fig. 3a and Fig. 3b along the target seam in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an image fusion apparatus provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of the present invention.
To help those skilled in the art better understand the solution of the present invention, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
An image fusion method provided by an embodiment of the present invention is described in detail next. Fig. 1 is a flowchart of this method, which includes:
S10: Calculate the overlapping region of the video frames in the two video streams.
In embodiments of the present invention, feature point detection and registration can be performed on the two video streams to solve for the overlapping region (ROI) of the video frames.
S11: Using the frame-difference method, calculate all motion regions within the overlapping region; and, from each motion region's width, height, and number of moving pixels, calculate its motion amount.
After the motion regions within the overlapping region have been computed, the width, height, and number of moving pixels of each motion region can be obtained.
In embodiments of the present invention, the computed motion regions can be denoted moveArea[k] (k = 0, 1, 2, ..., n-1), where n is the number of motion regions found. The moveArea data structure holds:
- pix_x: x coordinate of the top-left pixel of the motion region;
- pix_y: y coordinate of the top-left pixel of the motion region;
- rect_width: width of the motion region;
- rect_height: height of the motion region.
All of the above variables are in units of pixels.
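As an illustration of this step, the following Python sketch thresholds the absolute difference of two consecutive grayscale frames and groups the moving pixels into 4-connected regions, producing moveArea-style records. The threshold value, the 4-connectivity, and the helper name detect_motion_regions are assumptions for illustration, not part of the patent.

```python
from collections import deque
import numpy as np

def detect_motion_regions(prev_frame, cur_frame, thresh=25):
    """Frame-difference motion detection on 2-D grayscale frames: threshold
    |cur - prev| and group moving pixels into 4-connected regions, returning
    one moveArea-style record per region."""
    diff = np.abs(cur_frame.astype(np.int32) - prev_frame.astype(np.int32))
    mask = diff > thresh
    seen = np.zeros_like(mask, dtype=bool)
    regions = []
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                # BFS flood fill over one 4-connected component
                q = deque([(y, x)])
                seen[y, x] = True
                ys, xs, count = [y], [x], 1
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            ys.append(ny); xs.append(nx); count += 1
                            q.append((ny, nx))
                regions.append({
                    "pix_x": min(xs), "pix_y": min(ys),
                    "rect_width": max(xs) - min(xs) + 1,
                    "rect_height": max(ys) - min(ys) + 1,
                    "move_pix": count,
                })
    return regions
```

The bounding box and moving-pixel count of each record feed directly into the motion-amount and step-value formulas below.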
In embodiments of the present invention, the motion amount of a motion region, denoted ρ, can be calculated according to the following formula:

ρ = move_pix / (rect_width × rect_height)

where move_pix is the number of moving pixels in the motion region.
To simplify the later selection of the target seam, the motion region can be recorded in a motion-region information structure moveAreaInfo, which holds:
- move_area: a moveArea structure;
- move_ratio: the computed motion amount.
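Under the formula above, the motion amount is simply the fraction of moving pixels inside the region's bounding box. A minimal Python sketch, assuming the moveArea-style dictionary layout used here for illustration:

```python
def motion_amount(region):
    """Motion amount rho: the fraction of moving pixels inside the motion
    region's bounding box, i.e. move_pix / (rect_width * rect_height)."""
    return region["move_pix"] / (region["rect_width"] * region["rect_height"])
```

A fully moving box gives ρ = 1; a sparsely moving one gives a value near 0, which later shrinks the horizontal step Δstep accordingly.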
S12: From the overlapping region and the motion regions, determine the starting point of each seam.
In embodiments of the present invention, all pixels of the overlapping region at its height value can be selected according to the height of the overlapping region.
For example, the pixels selected from the overlapping region are p(j, height) (j = 0, 1, 2, ..., width-1), where width is the width of the ROI in pixels and height is the height of the ROI.
By traversing all motion regions moveArea[k], the pixels that do not belong to any motion region can be selected, and these pixels are taken as the starting points.
Specifically, the selection can be made according to the following condition:

p(j, height) ∈ moveArea(k)

A pixel that satisfies this condition cannot serve as a starting point; a pixel that does not satisfy it can.
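The starting-point selection of S12 can be sketched in Python as follows; the helper names and the bounding-box membership test are illustrative assumptions:

```python
def in_region(j, i, region):
    """True if pixel (j, i) lies inside a moveArea-style bounding box."""
    return (region["pix_x"] <= j < region["pix_x"] + region["rect_width"]
            and region["pix_y"] <= i < region["pix_y"] + region["rect_height"])

def select_starting_points(roi_width, roi_height, regions):
    """Starting points: the columns j whose pixel p(j, height) on the bottom
    row of the ROI falls outside every motion region."""
    return [j for j in range(roi_width)
            if not any(in_region(j, roi_height, r) for r in regions)]
```

Each surviving column becomes the foot of one candidate seam.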
S13: Calculate the grayscale difference matrix of the overlapping region.
Selecting target points later depends on the grayscale difference matrix of the overlapping region. So that target points can be selected quickly afterwards, the grayscale difference matrix M of the overlapping region can be computed first.
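The patent does not spell out how M is computed; a common choice, assumed here, is the per-pixel absolute grayscale difference between the two registered ROI crops:

```python
import numpy as np

def gray_difference_matrix(left_roi, right_roi):
    """Grayscale difference matrix M over the overlapping region, taken as
    the per-pixel absolute difference of the two aligned grayscale crops.
    Casting to int32 first avoids uint8 wraparound."""
    return np.abs(left_roi.astype(np.int32) - right_roi.astype(np.int32))
```

Seam points with small M values are places where the two images agree, so a seam through them is least visible.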
S14: From the starting points, the motion amounts, and the grayscale difference matrix, select target points satisfying the preset rule; and, from the starting points and target points, determine the seam corresponding to each starting point.
From the description of S12, the ordinate of each starting point is the height of the ROI. When selecting target points, the search proceeds from the starting point with the ordinate decreasing step by step.
The concrete operations for selecting target points are described next, taking a single starting point as an example. As shown in Fig. 2, this step includes:
S140: Judge whether the target neighborhood point of the starting point satisfies the first preset condition; if so, perform S141; if not, perform S145.
A neighborhood point is a pixel adjacent to the starting point. Since the ordinate of the starting point equals the height of the ROI, i.e., it lies on the boundary row, it has 5 adjacent pixels rather than 8.
In embodiments of the present invention, a pixel's width coordinate serves as the abscissa and its height coordinate as the ordinate. The target neighborhood point is the pixel with the same abscissa as the starting point and an ordinate one less.
If p(j, i) denotes the starting point, its neighborhood points are p(j-1, i), p(j+1, i), p(j, i-1), p(j-1, i-1), and p(j+1, i-1), of which p(j, i-1) is the target neighborhood point of p(j, i).
The first preset condition is the condition for judging whether the starting point needs to move horizontally.
In embodiments of the present invention, the judgment can be made according to the following condition:

p(j, i-1) ∈ moveArea(k)

When the target point is selected for the first time, i = height. When the target neighborhood point p(j, i-1) belongs to a motion region moveArea(k), a horizontal move is needed, so S141 is performed; when p(j, i-1) does not belong to any motion region moveArea(k), no horizontal move is needed, so S145 is performed.
S141: Calculate the step value Δstep.
The width rect_width of the motion region was obtained in S11, and its motion amount ρ has already been calculated, so the step value can be computed as

Δstep = rect_width × ρ

where Δstep is the step value.
S142: Judge whether the target neighborhood point crosses the boundary; if so, perform S143; if not, perform S144.
In embodiments of the present invention, the following two conditions serve as the basis for judging whether the target neighborhood point crosses the boundary:

(j + Δstep) > ROI_WIDTH    (1)
(j - Δstep) < 0            (2)

If condition (1) or condition (2) holds, the target neighborhood point crosses the boundary and S143 is performed; if neither holds, it does not cross the boundary and S144 is performed.
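Conditions (1) and (2), together with the Δstep formula of S141, can be sketched as follows; rounding Δstep to an integer pixel count is an assumption, since the patent leaves ρ × rect_width fractional:

```python
def step_value(region):
    """Delta step = rect_width * rho, with rho the moving-pixel fraction of
    the region's bounding box; rounded to whole pixels for indexing."""
    rho = region["move_pix"] / (region["rect_width"] * region["rect_height"])
    return int(round(region["rect_width"] * rho))

def step_crosses_border(j, dstep, roi_width):
    """Conditions (1) and (2): a horizontal jump of +/- dstep from column j
    would leave the ROI."""
    return (j + dstep) > roi_width or (j - dstep) < 0
```

A denser motion region thus forces a wider sidestep, while the border test decides between the S143 and S144 branches.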
S143: According to the grayscale difference matrix, select from the neighborhood points of the starting point the one satisfying the second preset condition as the target point.
The second preset condition selects the neighborhood pixel with the minimum value in the grayscale difference matrix.
Specifically, the pixel corresponding to the minimum of M(j-1, i), M(j+1, i), M(j, i-1), M(j-1, i-1), and M(j+1, i-1) in the grayscale difference matrix is taken as the target point, i.e., the next search point.
S144: Determine the target point according to the step value and the grayscale difference matrix.
Specifically, the pixel corresponding to the minimum of M(j-Δstep, i) and M(j+Δstep, i) in the grayscale difference matrix is taken as the target point, i.e., the search point for the next step.
S145: Take the target neighborhood point as the target point.
S146: Judge whether the target point satisfies the third preset condition.
The third preset condition is the condition for judging whether to end operations S140-S145.
In the initial state, the ordinate of the starting point is i = height, and each time a target point is selected the value of i decreases by 1. The third preset condition can be the judgment of whether the ordinate of the target point is 0, i.e., whether i = 0.
If the third preset condition is satisfied, all target points for this starting point have been selected, and operations S140-S145 can end.
If it is not satisfied, not all target points for this starting point have been selected yet, and S147 is performed.
S147: Take the target point as the new starting point and return to S140.
After all target points for a starting point have been selected, connecting the starting point with its target points yields a seam.
The above description took a single starting point as an example; in the same way, the target points of each remaining starting point can be selected in turn, thereby determining the seam corresponding to every starting point.
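Putting S140-S147 together, the following is a simplified, illustrative sketch of the per-starting-point seam search. It redefines the small helpers it needs so it is self-contained, and the tie-breaking, the integer rounding of Δstep, and the bounds handling are assumptions not specified by the patent:

```python
def in_region(j, i, region):
    """Pixel (j, i) lies inside a moveArea-style bounding box."""
    return (region["pix_x"] <= j < region["pix_x"] + region["rect_width"]
            and region["pix_y"] <= i < region["pix_y"] + region["rect_height"])

def step_value(region):
    """Delta step = rect_width * rho (moving-pixel fraction of the box)."""
    rho = region["move_pix"] / (region["rect_width"] * region["rect_height"])
    return int(round(region["rect_width"] * rho))

def trace_seam(j0, height, M, regions, roi_width):
    """Simplified sketch of S140-S147: walk from row `height` down to row 0.
    If the pixel directly above is outside every motion region, step straight
    up (S145). Otherwise compute Delta step (S141); if the horizontal jump
    would leave the ROI, fall back to the cheapest of the five neighbors in
    M (S143), else jump Delta step to the cheaper side (S144). M is indexed
    M[i][j] (row, then column) and must have height + 1 rows."""
    seam = [(j0, height)]
    j, i = j0, height
    while i > 0:
        blocking = next((r for r in regions if in_region(j, i - 1, r)), None)
        if blocking is None:
            i -= 1                                    # S145: straight up
        else:
            dstep = step_value(blocking)              # S141
            if (j + dstep) > roi_width or (j - dstep) < 0:
                # S143: cheapest of the five in-bounds neighbors
                cand = [(j - 1, i), (j + 1, i), (j, i - 1),
                        (j - 1, i - 1), (j + 1, i - 1)]
                cand = [(x, y) for x, y in cand
                        if 0 <= x < roi_width and 0 <= y <= height]
                j, i = min(cand, key=lambda p: M[p[1]][p[0]])
            else:
                # S144: jump Delta step left or right, whichever is cheaper
                j = min((j - dstep, j + dstep), key=lambda x: M[i][x])
        seam.append((j, i))
    return seam
```

With no motion region in the way the seam is a straight vertical line; a blocking region produces a sideways detour around it.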
S15: Select from these seams one that satisfies the preset condition as the target seam; and fuse the video frames along the target seam.
Starting points and target points can both be regarded as pixels. The fewer pixels a seam contains, the better it performs: the image stitched along it is more coherent and natural.
The preset condition is the basis for choosing the optimal seam from the many candidates. In embodiments of the present invention, the preset condition can be to choose the seam containing the fewest pixels, and that seam is the target seam.
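The preset condition of S15 then reduces to picking the shortest candidate, as in this sketch (seams represented as lists of pixel coordinates):

```python
def pick_target_seam(seams):
    """Preset condition from S15: among the candidate seams, the one
    containing the fewest pixels is the target seam."""
    return min(seams, key=len)
```

Since every seam spans the same vertical extent, fewer pixels means fewer horizontal detours, i.e., the straightest path around the motion regions.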
Referring to Fig. 3 a and Fig. 3 b, by taking two video cameras that left and right horizontal is set as an example, Fig. 3 a are left side camera shooting
Piece image, Fig. 3 b are the piece image of right camera shooting, it can be seen that including in this two images
Pedestrian, the region where pedestrian belong to moving region.According to method provided in an embodiment of the present invention, this two width figure can be selected
One target suture of picture, as shown in Figure 3 c for according to this bar suture merge after image, can be seen that this from Fig. 3 c
Suture effectively avoids the moving region in image so that the image after fusion is more coherent, natural.
The above description took the fusion of two video frames as an example; in the same way, the fusion of all video frames in the two video streams can be completed. In embodiments of the present invention, whether fusion of the video frames of the two streams is finished can be judged in order to end the above operations. If fusion is not finished, the process returns to S11 and the above operations are repeated until all video frames in the two streams have been fused.
Fig. 4 is a schematic structural diagram of an image fusion apparatus provided by an embodiment of the present invention, including a computing unit 41, a determining unit 42, and a selection unit 43, wherein:
the computing unit 41 is configured to calculate the overlapping region of the two video streams;
the computing unit 41 is further configured to calculate, using the frame-difference method, all motion regions within the overlapping region, and to calculate each motion region's motion amount from its width, height, and number of moving pixels;
the determining unit 42 is configured to determine, from the overlapping region and the motion regions, the starting point of each seam;
the computing unit 41 is further configured to calculate the grayscale difference matrix of the overlapping region;
the selection unit 43 is configured to select, from the starting points, the motion amounts, and the grayscale difference matrix, target points satisfying the preset rule, and to determine, from the starting points and target points, the seam corresponding to each starting point;
the selection unit 43 is further configured to select from these seams one that satisfies the preset condition as the target seam, and to fuse the video frames along the target seam.
Optionally, the computing unit is specifically configured to calculate the motion amount ρ of the motion region according to the formula

ρ = move_pix / (rect_width × rect_height)

where move_pix is the number of moving pixels in the motion region, rect_width is the width of the motion region, and rect_height is its height.
Optionally, the determining unit is specifically configured to select, according to the height of the overlapping region, all pixels of the overlapping region at that height, and to select as starting points those pixels that do not belong to any motion region.
Optionally, the selection unit includes a first judgment subunit, a computation subunit, a selection subunit, a determination subunit, an assignment subunit and a second judgment subunit.
The first judgment subunit is configured to judge whether the target neighborhood point of the starting point meets a first preset condition; if so, it triggers the computation subunit; if not, it triggers the assignment subunit.
The computation subunit is configured to calculate the step value Δstep according to the formula Δstep = rect_width × ρ.
The first judgment subunit is further configured to judge whether the target neighborhood point crosses the border; if so, it triggers the selection subunit; if not, it triggers the determination subunit.
The selection subunit is configured to select, according to the gray-difference matrix, a neighborhood point of the starting point that meets a second preset condition as the target point.
The determination subunit is configured to determine the target point according to the step value and the gray-difference matrix.
The assignment subunit is configured to take the target neighborhood point as the target point.
The second judgment subunit is configured to judge whether the target point meets a third preset condition; if so, the operation ends; if not, the target point is taken as the new starting point and control returns to the first judgment subunit.
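The stepwise search described by these subunits can be sketched as follows. This is a sketch only: the patent's three "preset conditions" are not spelled out in this excerpt, so simple stand-ins are used (a neighbour inside a moving region for the first, minimal gray difference for the second, reaching the bottom row for the third), and all names are illustrative:

```python
import numpy as np

def grow_seam(start, gray_diff, moving_mask, rho, rect_width):
    """Grow one seam line downward from `start` through the
    gray-difference matrix, stepping over moving regions with a jump of
    delta_step = rect_width * rho."""
    h, w = gray_diff.shape
    row, col = start
    seam = [start]
    while row < h - 1:                       # third condition (stand-in): bottom row reached
        nbrs = [(row + 1, c) for c in (col - 1, col, col + 1) if 0 <= c < w]
        if any(moving_mask[r, c] for r, c in nbrs):   # first condition (stand-in)
            delta_step = max(int(rect_width * rho), 1)    # step value
            target_row = row + delta_step
            if target_row >= h:              # target would cross the border:
                # fall back to the best neighbour by gray difference
                row, col = min(nbrs, key=lambda p: gray_diff[p])
            else:
                # step over the moving region, then pick the best column
                cand = [(target_row, c) for c in (col - 1, col, col + 1)
                        if 0 <= c < w]
                row, col = min(cand, key=lambda p: gray_diff[p])
        else:
            # second condition (stand-in): minimal gray difference
            row, col = min(nbrs, key=lambda p: gray_diff[p])
        seam.append((row, col))
    return seam
```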
Optionally, the device further includes a judging unit.
The judging unit is configured to judge whether fusion of the video frames of the two video channels is finished; if not, it triggers the computing unit.
For the features of the embodiment corresponding to Fig. 4, reference may be made to the related description of the embodiments corresponding to Fig. 1 and Fig. 2, which is not repeated here.
As can be seen from the above technical scheme, the overlapping region of the video frames of the two video channels is calculated, and the frame-difference method is then used to find all moving regions corresponding to the overlapping region. Once a moving region is determined, its motion amount is calculated from its width, its height and the number of its moving pixels. The starting point of each seam line is determined from the overlapping region and the moving regions; the gray-difference matrix of the overlapping region is calculated, and the target points meeting the preset rule are selected according to the starting points and the motion amount. From the starting points and the target points, the seam line corresponding to each starting point is determined; from these seam lines, the one that meets the preset condition, i.e. the optimal seam line, is selected as the target seam line, and the video frames are fused according to it. Because the moving regions of the video frames are effectively avoided while the target seam line is determined, the image fused along that seam line avoids overlap, misalignment and "ghosting" in the stitched result, improving the accuracy of image stitching.
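The gray-difference matrix itself is not defined in this excerpt; a common choice in seam-line stitching is the per-pixel absolute intensity difference between the two overlapped frames, sketched here under that assumption (names are illustrative):

```python
import numpy as np

def gray_difference_matrix(overlap_a, overlap_b):
    """Gray-difference matrix of the overlap region: per-pixel absolute
    intensity difference of the two source frames. A seam line that
    follows small values of this matrix crosses where the frames agree,
    so the stitch is least visible there."""
    a = overlap_a.astype(np.float32)
    b = overlap_b.astype(np.float32)
    return np.abs(a - b)
```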
The method and device for image fusion provided by the embodiments of the present invention have been described in detail above. The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another. Since the device disclosed in the embodiments corresponds to the method disclosed therein, its description is relatively brief, and the relevant parts may refer to the description of the method. It should be pointed out that those skilled in the art may make improvements and modifications to the present invention without departing from its principles, and such improvements and modifications also fall within the protection scope of the claims of the present invention.
Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, in computer software, or in a combination of the two. To clearly illustrate this interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or in software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Claims (10)
1. A method of image fusion, characterized by comprising:
S10: calculating the overlapping region of the video frames of two video channels;
S11: calculating all moving regions corresponding to the overlapping region by using the frame-difference method; and calculating the motion amount of each moving region according to its width, its height and the number of its moving pixels;
S12: determining the starting point of each seam line according to the overlapping region and the moving regions;
S13: calculating the gray-difference matrix of the overlapping region;
S14: selecting target points meeting a preset rule according to the starting points, the motion amount and the gray-difference matrix; and determining the seam line corresponding to each starting point according to the starting points and the target points;
S15: selecting from the seam lines one seam line meeting a preset condition as the target seam line; and fusing the video frames according to the target seam line.
2. The method according to claim 1, characterized in that S11 comprises: calculating the motion amount ρ of the moving region according to the formula
ρ = move_pix / (rect_width × rect_height),
where move_pix is the number of moving pixels in the moving region, rect_width is the width of the moving region, and rect_height is the height of the moving region.
3. The method according to claim 2, characterized in that S12 comprises: selecting, according to a height value of the overlapping region, all pixels of the overlapping region at that height value; and selecting from those pixels the pixels that do not belong to any moving region as starting points.
4. The method according to claim 3, characterized in that S14 comprises:
S140: judging whether the target neighborhood point of the starting point meets a first preset condition; if so, performing S141; if not, performing S145;
S141: calculating the step value Δstep according to the formula Δstep = rect_width × ρ;
S142: judging whether the target neighborhood point crosses the border; if so, performing S143; if not, performing S144;
S143: selecting, according to the gray-difference matrix, a neighborhood point of the starting point that meets a second preset condition as the target point;
S144: determining the target point according to the step value and the gray-difference matrix;
S145: taking the target neighborhood point as the target point;
S146: judging whether the target point meets a third preset condition; if so, ending the operation; if not, performing S147;
S147: taking the target point as the starting point, and returning to S140.
5. The method according to any one of claims 1 to 4, characterized by further comprising: judging whether fusion of the video frames of the two video channels is finished; if not, returning to S11.
6. A device for image fusion, characterized by comprising a computing unit, a determining unit and a selection unit, wherein:
the computing unit is configured to calculate the overlapping region of two video channels;
the computing unit is further configured to calculate all moving regions corresponding to the overlapping region by using the frame-difference method, and to calculate the motion amount of each moving region according to its width, its height and the number of its moving pixels;
the determining unit is configured to determine the starting point of each seam line according to the overlapping region and the moving regions;
the computing unit is further configured to calculate the gray-difference matrix of the overlapping region;
the selection unit is configured to select target points meeting a preset rule according to the starting points, the motion amount and the gray-difference matrix, and to determine the seam line corresponding to each starting point according to the starting points and the target points;
the selection unit is further configured to select from the seam lines one seam line meeting a preset condition as the target seam line, and to fuse the video frames according to the target seam line.
7. The device according to claim 6, characterized in that the computing unit is specifically configured to calculate the motion amount ρ of the moving region according to the formula
ρ = move_pix / (rect_width × rect_height),
where move_pix is the number of moving pixels in the moving region, rect_width is the width of the moving region, and rect_height is the height of the moving region.
8. The device according to claim 7, characterized in that the determining unit is specifically configured to select, according to a height value of the overlapping region, all pixels of the overlapping region at that height value, and to select from those pixels the pixels that do not belong to any moving region as starting points.
9. The device according to claim 8, characterized in that the selection unit includes a first judgment subunit, a computation subunit, a selection subunit, a determination subunit, an assignment subunit and a second judgment subunit, wherein:
the first judgment subunit is configured to judge whether the target neighborhood point of the starting point meets a first preset condition; if so, it triggers the computation subunit; if not, it triggers the assignment subunit;
the computation subunit is configured to calculate the step value Δstep according to the formula Δstep = rect_width × ρ;
the first judgment subunit is further configured to judge whether the target neighborhood point crosses the border; if so, it triggers the selection subunit; if not, it triggers the determination subunit;
the selection subunit is configured to select, according to the gray-difference matrix, a neighborhood point of the starting point that meets a second preset condition as the target point;
the determination subunit is configured to determine the target point according to the step value and the gray-difference matrix;
the assignment subunit is configured to take the target neighborhood point as the target point;
the second judgment subunit is configured to judge whether the target point meets a third preset condition; if so, the operation ends; if not, the target point is taken as the starting point and control returns to the first judgment subunit.
10. The device according to any one of claims 6 to 9, characterized by further comprising a judging unit, wherein the judging unit is configured to judge whether fusion of the video frames of the two video channels is finished; if not, the computing unit is triggered.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710538727.9A CN107346536B (en) | 2017-07-04 | 2017-07-04 | Image fusion method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107346536A true CN107346536A (en) | 2017-11-14 |
CN107346536B CN107346536B (en) | 2020-08-11 |
Family
ID=60256865
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710538727.9A Active CN107346536B (en) | 2017-07-04 | 2017-07-04 | Image fusion method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107346536B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101414379A (en) * | 2007-10-17 | 2009-04-22 | 日电(中国)有限公司 | Apparatus and method for generating panorama image |
CN101859433A (en) * | 2009-04-10 | 2010-10-13 | 日电(中国)有限公司 | Image mosaic device and method |
WO2014054958A3 (en) * | 2012-10-05 | 2014-07-24 | Universidade De Coimbra | Method for aligning and tracking point regions in images with radial distortion that outputs motion model parameters, distortion calibration, and variation in zoom |
US20140363087A1 (en) * | 2013-06-06 | 2014-12-11 | Apple Inc. | Methods of Image Fusion for Image Stabilization |
US20160323557A1 (en) * | 2015-04-30 | 2016-11-03 | Thomson Licensing | Method for obtaining light-field data using a non-light-field imaging device, corresponding device, computer program product and non-transitory computer-readable carrier medium |
Non-Patent Citations (1)
Title |
---|
谷雨 (Gu Yu) et al.: "Image stitching combining the optimal seam line and multi-resolution fusion", Journal of Image and Graphics (《中国图像图形学报》) *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111583357A (en) * | 2020-05-20 | 2020-08-25 | 重庆工程学院 | Object motion image capturing and synthesizing method based on MATLAB system |
US20220147752A1 (en) * | 2020-11-06 | 2022-05-12 | Sigmastar Technology Ltd. | Image stitching apparatus, image processing chip and image stitching method |
CN113344787A (en) * | 2021-06-11 | 2021-09-03 | 北京中交华安科技有限公司 | Optimal suture line automatic adjustment algorithm, traffic early warning method and system |
CN113344787B (en) * | 2021-06-11 | 2022-02-01 | 北京中交华安科技有限公司 | Optimal suture line automatic adjustment algorithm, traffic early warning method and system |
Also Published As
Publication number | Publication date |
---|---|
CN107346536B (en) | 2020-08-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6730690B2 (en) | Dynamic generation of scene images based on the removal of unwanted objects present in the scene | |
JP5383576B2 (en) | Imaging apparatus, imaging method, and program | |
CN101228477B (en) | Real-time preview system and method for panoramic images | |
CN104992408B (en) | For the panorama image generation method and device of user terminal | |
US10909703B2 (en) | Image processing method, electronic device and computer-readable storage medium | |
CN103384998B (en) | Imaging device and imaging method | |
CN109948398B (en) | Image processing method for panoramic parking and panoramic parking device | |
TWI548276B (en) | Method and computer-readable media for visualizing video within existing still images | |
CN107087107A (en) | Image processing apparatus and method based on dual camera | |
CN107346536A (en) | A kind of method and apparatus of image co-registration | |
CN111462503B (en) | Vehicle speed measuring method and device and computer readable storage medium | |
CN106709878B (en) | A kind of rapid image fusion method | |
JP2005100407A (en) | System and method for creating panorama image from two or more source images | |
CN107085842A (en) | The real-time antidote and system of self study multiway images fusion | |
CN103067656B (en) | Camera head and image capture method | |
CN102209197A (en) | Imaging apparatus and imaging method | |
CN107135330A (en) | A kind of method and apparatus of video frame synchronization | |
CN106856000A (en) | A kind of vehicle-mounted panoramic image seamless splicing processing method and system | |
CN110245199B (en) | Method for fusing large-dip-angle video and 2D map | |
CN107038686A (en) | A kind of method and apparatus of image mosaic processing | |
JP6110780B2 (en) | Additional information display system | |
JP5268025B2 (en) | Imaging device | |
JPH10126665A (en) | Image composing device | |
JP2007174301A (en) | Image photographing apparatus | |
KR20020078663A (en) | Patched Image Alignment Method and Apparatus In Digital Mosaic Image Construction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||