CN107346536B - Image fusion method and device - Google Patents
- Publication number: CN107346536B (application CN201710538727.9A)
- Authority
- CN
- China
- Prior art keywords
- target
- motion
- point
- area
- subunit
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction (under G06T5/00—Image enhancement or restoration)
- G06T7/254—Analysis of motion involving subtraction of images (under G06T7/00—Image analysis; G06T7/20—Analysis of motion)
- G06T2207/10016—Video; Image sequence (under G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality)
- G06T2207/20221—Image fusion; Image merging (under G06T2207/20—Special algorithmic details; G06T2207/20212—Image combination)
Abstract
The embodiment of the invention discloses an image fusion method and device. The overlapping area of the video frames in two paths of videos is calculated, all motion areas corresponding to the overlapping area are calculated by a frame difference method, and the motion amount of each motion area is calculated. According to the overlapping area and the motion areas, the starting point of each suture line can be determined; by calculating the gray difference matrix of the overlapping area and according to the starting point and the motion amount, target points which accord with a preset rule can be selected; and the suture line corresponding to each starting point is determined according to the starting point and the target points. One suture line meeting a preset condition is selected from the suture lines as the target suture line, and the fusion of the video frames can be realized according to the target suture line. Since the motion areas in the video frame are effectively avoided when the target suture line is determined, phenomena such as overlapping, dislocation and ghosting can be effectively avoided in the image fused along the suture line.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for image fusion.
Background
Image fusion, as a branch of information fusion, is a hot spot in current information fusion research. The data of image fusion take the form of images containing light and shade, color, temperature, distance and other scene features, and these images may be presented as a single frame or as a sequence. Image fusion combines the information of two or more images into one image, so that the fused image contains more information and can be observed by a person or processed by a computer more conveniently. The aim of image fusion is to reduce the uncertainty and redundancy of the output while combining the relevant information as fully as possible for the practical application. Image fusion has the obvious advantages of enlarging the temporal and spatial information contained in the image, reducing uncertainty, increasing reliability and improving the robustness of the system.
Image fusion refers to the process of integrating the information of a plurality of input images to obtain an output image of higher quality. During image acquisition, due to differences in illumination, angle and the like of the shot scene, the overlapping areas of the images to be fused can differ considerably.
In the traditional approach, image fusion is carried out directly according to the projection transformation matrix obtained by registration, so the final fused image often shows obvious splicing seams. When a moving object appears in the overlapping region, overlapping, dislocation, "ghost" artifacts and the like appear in the stitched image.
Therefore, how to realize accurate fusion of images is a problem to be solved urgently by those skilled in the art.
Disclosure of Invention
The embodiment of the invention aims to provide an image fusion method and device, which can realize accurate fusion of images.
To solve the above technical problem, an embodiment of the present invention provides an image fusion method, including:
s10: calculating the overlapping area of the video frames in the two paths of videos;
s11: calculating all motion areas corresponding to the overlapping areas by using a frame difference method; calculating the amount of motion of the motion area according to the width value and the height value of the motion area and the number of the motion pixels;
s12: determining a starting point of each suture line according to the overlapping area and the motion area;
s13: calculating a gray difference matrix of the overlapping area;
s14: selecting a target point which accords with a preset rule according to the starting point, the motion amount and the gray difference matrix; determining the suture lines corresponding to all the starting points according to the starting points and the target points;
s15: selecting one suture line meeting preset conditions from the suture lines as a target suture line; and realizing the fusion of the video frames according to the target suture line.
Optionally, in the S11, the method includes:
according to the formula
ρ = move_pix / (rect_width × rect_height)
the motion amount ρ of the motion region is calculated,
wherein move_pix represents the number of motion pixels in the motion region; rect_width represents the width value of the motion region; rect_height represents the height value of the motion region.
Optionally, in the S12, including:
selecting all pixel points corresponding to the height values in the overlapping area according to the height values of the overlapping area;
and selecting the pixel points which do not belong to the motion area from the pixel points as starting points.
Optionally, in the S14, the method includes:
s140: judging whether a target neighborhood point corresponding to the starting point meets a first preset condition or not; if yes, go to S141; if not, executing S145;
s141: calculating a step value Δstep according to the formula Δstep = rect_width × ρ;
s142: judging whether the target neighborhood point is out of range or not; if yes, go to S143; if not, executing S144;
s143: selecting neighborhood points meeting a second preset condition from neighborhood points corresponding to the starting point as target points according to the gray difference matrix;
s144: determining a target point according to the step value and the gray difference matrix;
s145: taking the target neighborhood point as the target point;
s146: judging whether the target point reaches a third preset condition or not;
if yes, ending the operation; if not, executing S147;
s147: the target point is taken as a starting point, and the process returns to the step S140.
Optionally, the method further includes:
judging whether the video frames of the two paths of videos are completely fused or not;
if not, the process returns to S11.
The embodiment of the invention also provides an image fusion device, which comprises a computing unit, a determining unit and a selecting unit,
the computing unit is used for computing the overlapping area of the two paths of videos;
the computing unit is further configured to compute all motion regions corresponding to the overlap region by using a frame difference method; calculating the amount of motion of the motion area according to the width value and the height value of the motion area and the number of the motion pixels;
the determining unit is used for determining the starting point of each suture line according to the overlapping area and the motion area;
the computing unit is further used for computing a gray difference matrix of the overlapping area;
the selection unit is used for selecting a target point which accords with a preset rule according to the starting point, the motion amount and the gray level difference matrix; determining the suture lines corresponding to all the starting points according to the starting points and the target points;
the selecting unit is also used for selecting one suture line meeting preset conditions from the suture lines as a target suture line; and realizing the fusion of the video frames according to the target suture line.
Optionally, the calculating unit is specifically configured to calculate, according to the formula
ρ = move_pix / (rect_width × rect_height),
the motion amount ρ of the motion region,
wherein move_pix represents the number of motion pixels in the motion region; rect_width represents the width value of the motion region; rect_height represents the height value of the motion region.
Optionally, the determining unit is specifically configured to select, according to the height value of the overlapping area, all pixel points corresponding to the height value in the overlapping area; and selecting the pixel points which do not belong to the motion area from the pixel points as the starting points.
Optionally, the selecting unit includes a first judging subunit, a calculating subunit, a selecting subunit, a determining subunit, a serving subunit and a second judging subunit,
the first judging subunit is configured to judge whether a target neighborhood point corresponding to the starting point meets a first preset condition; if yes, triggering the calculating subunit; if not, triggering the serving subunit;
the calculating subunit calculates a step value Δstep according to the formula Δstep = rect_width × ρ;
the first judging subunit is further configured to judge whether the target neighborhood point is out of range; if yes, triggering the selected subunit; if not, triggering the determining subunit;
the selecting subunit is used for selecting a neighborhood point meeting a second preset condition from the neighborhood points corresponding to the starting point according to the gray difference matrix to serve as a target point;
the determining subunit is configured to determine a target point according to the step value and the gray level difference matrix;
the serving subunit is configured to take the target neighborhood point as the target point,
the second judging subunit is configured to judge whether the target point meets a third preset condition;
if yes, ending the operation; if not, the target point is taken as a starting point, and the first judgment subunit is returned.
Optionally, the system further comprises a judging unit,
the judging unit is used for judging whether the video frames of the two paths of videos are fused or not;
if not, the calculation unit is triggered.
According to the technical scheme, all motion areas corresponding to the overlapping areas can be calculated by calculating the overlapping areas of the video frames in the two videos and then utilizing a frame difference method. After the motion area is determined, the motion amount of the motion area can be calculated according to the width value, the height value and the number of the motion pixels of the motion area. According to the overlapping area and the motion area, the starting point of each suture line can be determined; by calculating the gray difference matrix of the overlapping area and according to the starting point and the motion amount, a target point which accords with a preset rule can be selected; determining a suture line corresponding to each starting point according to the starting point and the target point; one suture line meeting the preset conditions is selected from the suture lines, namely, an optimal suture line is selected as a target suture line, and the fusion of the video frames can be realized according to the target suture line. Therefore, the motion area in the video frame is effectively avoided when the target suture line is determined, so that the phenomena of overlapping, dislocation, ghost shadow and the like in the spliced image can be effectively avoided according to the image fused by the suture line, and the accuracy of image splicing is improved.
Drawings
In order to illustrate the embodiments of the present invention more clearly, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained by those skilled in the art without inventive effort.
Fig. 1 is a flowchart of an image fusion method according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for selecting a target point according to an embodiment of the present invention;
fig. 3a is a schematic diagram of a frame of image captured by the left camera according to the embodiment of the present invention;
FIG. 3b is a diagram of a frame of image captured by the right camera according to an embodiment of the present invention;
FIG. 3c is a schematic diagram of the image obtained by fusing FIG. 3a and FIG. 3b along the target suture line according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an image fusion apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without any creative work belong to the protection scope of the present invention.
In order that those skilled in the art will better understand the disclosure, the invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Next, a method for image fusion according to an embodiment of the present invention is described in detail. Fig. 1 is a flowchart of an image fusion method according to an embodiment of the present invention, where the method includes:
s10: and calculating the overlapping area of the video frames in the two paths of videos.
In the embodiment of the invention, the two videos can be subjected to feature point detection and registration to solve the overlapping Region (ROI) of the video frames in the two videos.
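The patent does not fix a particular registration method. As a minimal sketch, assuming registration has already been reduced to a pure horizontal shift between two equally sized frames (the function name and parameters are illustrative, not from the patent), the overlap rectangle can be derived as follows:

```python
def overlap_region(frame_width, frame_height, offset_x):
    """Compute the overlap rectangle (ROI) of two equally sized frames.

    Assumes registration reduced to a pure horizontal shift: the right
    frame's left edge sits at x = offset_x inside the left frame.
    Returns (x, y, width, height) in the left frame's coordinates,
    or None if the frames do not overlap.
    """
    overlap_w = frame_width - offset_x
    if overlap_w <= 0:
        return None
    return (offset_x, 0, overlap_w, frame_height)
```

For example, two 640×480 frames whose registration offset is 500 pixels share a 140-pixel-wide overlap strip.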
S11: calculating all motion areas corresponding to the overlapping areas by using a frame difference method; and calculating the amount of motion of the motion area according to the width value and the height value of the motion area and the number of the motion pixels.
After the motion area corresponding to the overlap area is calculated, the width value, the height value and the number of the motion pixels of the motion area can be obtained.
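The frame-difference step can be sketched as follows, under the simplifying assumption that all changed pixels are gathered into a single bounding rectangle; a real implementation would split the motion mask into connected components, one moveArea per moving object:

```python
def frame_difference_motion(prev, cur, thresh=25):
    """Detect motion in an overlap region with the frame-difference method.

    prev, cur: equally sized 2-D lists of gray values.
    Returns (bbox, move_pix): bbox = (pix_x, pix_y, rect_width, rect_height)
    bounding all changed pixels (None if there is no motion), and
    move_pix = the count of changed pixels.  The threshold is an
    illustrative default, not specified by the patent.
    """
    xs, ys = [], []
    move_pix = 0
    for y, (row_p, row_c) in enumerate(zip(prev, cur)):
        for x, (a, b) in enumerate(zip(row_p, row_c)):
            if abs(a - b) > thresh:   # pixel changed between frames
                move_pix += 1
                xs.append(x)
                ys.append(y)
    if not xs:
        return None, 0
    x0, y0 = min(xs), min(ys)
    bbox = (x0, y0, max(xs) - x0 + 1, max(ys) - y0 + 1)
    return bbox, move_pix
```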
In the embodiment of the present invention, the calculated motion regions may be denoted moveArea[k] (k = 0, 1, 2, …, n−1), where n denotes the number of motion regions found. The moveArea structure holds the following fields:
wherein pix _ x represents the x-direction coordinate of the pixel in the upper left corner of the motion region; pix _ y represents the coordinate of the pixel y direction in the upper left corner of the motion region; the rect _ width indicates the width value of the motion area, and the rect _ height indicates the height value of the motion area. All of the above variables are in pixel units.
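The structure definition itself is not reproduced in the text above; the following sketch reconstructs it from the field descriptions (the `contains` helper is an added convenience, not part of the patent):

```python
from dataclasses import dataclass

@dataclass
class MoveArea:
    """Motion region; all fields are in pixel units (a sketch of the
    moveArea structure, field names taken from the text)."""
    pix_x: int        # x coordinate of the region's top-left pixel
    pix_y: int        # y coordinate of the region's top-left pixel
    rect_width: int   # width of the motion region
    rect_height: int  # height of the motion region

    def contains(self, x, y):
        """True if pixel (x, y) lies inside this motion region."""
        return (self.pix_x <= x < self.pix_x + self.rect_width
                and self.pix_y <= y < self.pix_y + self.rect_height)
```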
In the embodiment of the present invention, the motion amount of the motion region, denoted ρ, may be calculated according to the following formula:
ρ = move_pix / (rect_width × rect_height)
where move_pix is the number of motion pixels in the motion region.
In order to facilitate the subsequent selection of the target suture line, each motion region may be wrapped in a motion region information structure moveAreaInfo. Its fields are as follows:
Here, move_area holds the moveArea structure, and move_ratio holds the calculated motion amount.
S12: and determining the starting point of each suture line according to the overlapping area and the motion area.
In the embodiment of the present invention, all the pixel points corresponding to the height value in the overlapping area may be selected according to the height value of the overlapping area.
For example, all pixels selected from the overlap region are p(j, height), j = 0, 1, 2, …, width−1,
where width is the width value of the ROI and height is the height value of the ROI, both in pixel units.
By traversing all the motion areas moveArea [ k ], pixel points which do not belong to the motion areas can be selected and taken as starting points.
Specifically, the selection can be performed according to the following formula,
p(j,height)∈moveArea(k)
when the pixel point satisfies the formula, the pixel point cannot be used as a starting point, and when the pixel point does not satisfy the formula, the pixel point can be used as a starting point.
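The selection rule above can be sketched as follows: a candidate p(j, height) is kept as a starting point only if it satisfies none of the membership tests against the motion regions (function and parameter names are illustrative):

```python
def select_start_points(roi_width, roi_height, move_areas):
    """Pick seam starting points on the bottom row of the ROI.

    Candidates are p(j, height) for j = 0..roi_width-1; a candidate is
    kept only if it falls inside no motion region.  move_areas is a list
    of (pix_x, pix_y, rect_width, rect_height) tuples.
    """
    def in_area(x, y, area):
        ax, ay, aw, ah = area
        return ax <= x < ax + aw and ay <= y < ay + ah

    return [(j, roi_height)
            for j in range(roi_width)
            if not any(in_area(j, roi_height, a) for a in move_areas)]
```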
S13: and calculating a gray difference matrix of the overlapped area.
The target point is selected subsequently according to the gray difference matrix in the overlapping area. In order to facilitate the subsequent quick selection of the target point, the gray difference matrix M of the overlapping area may be calculated first.
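The patent does not spell out how M is computed; a common choice, assumed here, is the absolute gray-level difference between the two frames' pixels in the overlap region:

```python
def gray_difference_matrix(overlap_left, overlap_right):
    """Gray difference matrix M of the overlap region.

    Assumption (not stated in the text): M(x, y) is the absolute
    gray-level difference |I_left(x, y) - I_right(x, y)| between the
    two frames' overlap pixels.  Inputs are equally sized 2-D lists
    of gray values.
    """
    return [[abs(a - b) for a, b in zip(row_l, row_r)]
            for row_l, row_r in zip(overlap_left, overlap_right)]
```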
S14: selecting a target point which accords with a preset rule according to the starting point, the motion amount and the gray difference matrix; and determining the suture lines corresponding to all the starting points according to the starting points and the target points.
It can be known from the introduction of S12 that the ordinate of the pixel corresponding to the starting point is the height value of the ROI region; when selecting target points, they can be chosen starting from the starting point by decreasing the ordinate.
Next, a description will be given of a specific operation of selecting a target point by taking a starting point as an example. As shown in fig. 2, this step includes:
s140: judging whether a target neighborhood point corresponding to the starting point meets a first preset condition or not; if yes, go to S141; if not, S145 is executed.
The neighborhood point may be a pixel point adjacent to the starting point, and since the ordinate of the starting point is the height value of the ROI region, there are 5 adjacent pixel points.
In the embodiment of the present invention, the width value of the pixel point may be used as the abscissa, and the height value of the pixel point may be used as the ordinate. The target neighborhood point may be a pixel point having a width value equal to the starting point and a height value lower than the starting point.
In the embodiment of the present invention, p(j, i) may be used to represent the starting point, and the neighborhood points of the starting point are p(j−1, i), p(j+1, i), p(j, i−1), p(j−1, i−1) and p(j+1, i−1), where p(j, i−1) is the target neighborhood point of the starting point p(j, i).
The first preset condition may be a condition for judging whether the starting point needs to be horizontally moved.
In the embodiment of the present invention, the determination may be made according to the following formula,
p(j,i-1)∈moveArea(k)
when the target point is selected for the first time, i is height. When the target neighborhood point p (j, i-1) belongs to a pixel point of the motion region movearea (k), it indicates that horizontal movement is required, i.e. S141 is executed; when the target neighborhood point p (j, i-1) does not belong to the pixel point of the motion region movearea (k), it indicates that the horizontal movement is not required, and then S145 is performed.
S141: the step value Δstep is calculated according to the formula Δstep = rect_width × ρ.
The width value rect_width of the motion region was obtained in S11, and the motion amount ρ of the motion region has already been calculated.
In the embodiment of the present invention, the step value may be calculated according to the following formula:
Δstep = rect_width × ρ
where Δstep represents the step value.
S142: judging whether the target neighborhood point is out of range or not; if yes, go to S143; if not, go to S144.
In the embodiment of the invention, the following two formulas can be used as the basis for judging whether the target neighborhood point is out of range,
(j+Δstep)>ROI_WIDTH (1)
(j-Δstep)<0 (2)
when the formula (1) or the formula (2) is satisfied, the target neighborhood point is out of range, and S143 is executed; when neither formula (1) nor formula (2) is satisfied, it indicates that the target neighborhood point is not out of range, and S144 is executed.
S143: and selecting the neighborhood points meeting a second preset condition from the neighborhood points corresponding to the starting point as target points according to the gray difference matrix.
The second preset condition may be to select a pixel point corresponding to the minimum value of the adjacent points in the gray scale difference matrix.
Specifically, the pixel point corresponding to the minimum value among M(j−1, i), M(j+1, i), M(j, i−1), M(j−1, i−1) and M(j+1, i−1) in the gray difference matrix may be used as the next search point, i.e., the target point.
S144: and determining a target point according to the step value and the gray difference matrix.
Specifically, the point corresponding to the smaller of M(j−Δstep, i) and M(j+Δstep, i) in the gray difference matrix may be taken as the next search point, i.e., the target point.
S145: and taking the target neighborhood point as a target point.
S146: and judging whether the target point reaches a third preset condition.
The third preset condition may be a condition for judging whether to end the operations S140 to S145 described above.
In the initial state, the ordinate i of the starting point equals height, and the value of i is decreased by 1 every time a target point is selected. The third preset condition may therefore be to judge whether the ordinate of the target point is 0, that is, to judge whether i = 0.
When the third preset condition is satisfied, it indicates that all the target points corresponding to the starting point have been selected, and the operation flow of S140 to S145 may be ended.
If the third preset condition is not satisfied, it indicates that all the target points corresponding to the start point have not been selected, and S147 may be executed.
S147: the target point is taken as a starting point, and the process returns to the step S140.
When all the target points corresponding to the starting point are selected, the starting point and the corresponding target points are connected to obtain a suture line.
In the above description, taking one starting point as an example, referring to the above manner, the target points corresponding to the remaining other starting points may be sequentially selected, so as to determine the suture lines corresponding to all the starting points.
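The search loop S140–S147 can be condensed into the following sketch. Several details are simplified by assumption: the ordinate is decreased every iteration to guarantee termination, the out-of-range fallback (S143) only inspects the row above rather than all five neighborhood points, and each motion region carries its motion amount ρ alongside the bounding box:

```python
def trace_seam(start_j, roi_width, roi_height, M, move_areas):
    """Trace one suture line from p(start_j, roi_height) up to ordinate 0,
    following S140-S147 in condensed form.

    M is indexed M[i][j] (row i = ordinate) and must have roi_height rows.
    move_areas is a list of (pix_x, pix_y, rect_width, rect_height, rho)
    tuples, rho being the region's motion amount.
    """
    def region_at(x, y):
        for ax, ay, aw, ah, rho in move_areas:
            if ax <= x < ax + aw and ay <= y < ay + ah:
                return (aw, rho)
        return None

    seam = [(start_j, roi_height)]
    j, i = start_j, roi_height
    while i > 0:                       # S146: stop when the ordinate hits 0
        hit = region_at(j, i - 1)      # S140: is the target neighbour in a motion area?
        if hit is not None:
            rect_width, rho = hit
            dstep = max(1, round(rect_width * rho))     # S141: step value
            if j + dstep >= roi_width or j - dstep < 0:
                # S143: step would go out of range -> cheapest in-range
                # neighbour on the row above, judged by M
                cand = [jj for jj in (j - 1, j, j + 1) if 0 <= jj < roi_width]
            else:
                # S144: jump sideways by dstep toward the cheaper side of M
                cand = [j - dstep, j + dstep]
            j = min(cand, key=lambda jj: M[i - 1][jj])
        # S145 (no motion): keep j and simply move straight up
        i -= 1                         # S147: the new point becomes the start
        seam.append((j, i))
    return seam
```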
S15: selecting one suture line meeting preset conditions from the suture lines as a target suture line; and realizing the fusion of the video frames according to the target suture line.
Both the starting point and the target points can be regarded as pixel points. The fewer pixel points a suture line contains, the better the suture line performs, and the more coherent and natural the image stitched along it.
The preset condition may be a basis for selecting an optimal suture line from a plurality of suture lines. In the embodiment of the present invention, the preset condition may be that a suture line including the least number of pixel points is selected, and the suture line is a target suture line.
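Under that preset condition, the target-seam selection reduces to picking the shortest seam (assuming each seam is the list of points produced by the search described above):

```python
def select_target_seam(seams):
    """S15: among all traced suture lines, pick the one containing the
    fewest pixel points (the preset condition named in the text).
    seams is a non-empty list of point lists."""
    return min(seams, key=len)
```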
Referring to fig. 3a and 3b, taking two cameras arranged horizontally left and right as an example, fig. 3a is the image taken by the left camera and fig. 3b is the image taken by the right camera. As can be seen from the figures, both images contain a pedestrian, and the area where the pedestrian is located belongs to a motion area. According to the method provided by the embodiment of the invention, a target suture line of the two images can be selected. Fig. 3c shows the image fused according to that suture line; as can be seen from fig. 3c, the suture line effectively avoids the motion area in the image, so that the fused image is more coherent and natural.
According to the technical scheme, all motion areas corresponding to the overlapping areas can be calculated by calculating the overlapping areas of the video frames in the two videos and then utilizing a frame difference method. After the motion area is determined, the motion amount of the motion area can be calculated according to the width value, the height value and the number of the motion pixels of the motion area. According to the overlapping area and the motion area, the starting point of each suture line can be determined; by calculating the gray difference matrix of the overlapping area and according to the starting point and the motion amount, a target point which accords with a preset rule can be selected; determining a suture line corresponding to each starting point according to the starting point and the target point; one suture line meeting the preset conditions is selected from the suture lines, namely, an optimal suture line is selected as a target suture line, and the fusion of the video frames can be realized according to the target suture line. Therefore, the motion area in the video frame is effectively avoided when the target suture line is determined, so that the phenomena of overlapping, dislocation, ghost shadow and the like in the spliced image can be effectively avoided according to the image fused by the suture line, and the accuracy of image splicing is improved.
In the above description, the fusion of two video frames is taken as an example; with reference to the above manner, the fusion of all video frames in the two paths of videos can be completed. In the embodiment of the present invention, the above operation may be ended by judging whether the video frames of the two paths of videos are completely fused. When they are not completely fused, the process returns to S11 and the above operations are repeated until the fusion of all video frames in the two paths of videos is completed.
Fig. 4 is a schematic structural diagram of an image fusion apparatus provided in an embodiment of the present invention, including a calculating unit 41, a determining unit 42 and a selecting unit 43,
the calculating unit 41 is configured to calculate an overlapping area of the two paths of videos;
the calculating unit 41 is further configured to calculate all motion regions corresponding to the overlapping region by using a frame difference method; calculating the amount of motion of the motion area according to the width value and the height value of the motion area and the number of the motion pixels;
the determining unit 42 is configured to determine a starting point of each suture line according to the overlapping area and the motion area;
the calculation unit 41 is further configured to calculate a gray difference matrix of the overlapping region;
the selecting unit 43 is configured to select a target point meeting a preset rule according to the starting point, the motion amount, and the gray level difference matrix; determining the suture lines corresponding to all the starting points according to the starting points and the target points;
the selecting unit 43 is further configured to select one suture line meeting a preset condition from the suture lines as a target suture line; and realizing the fusion of the video frames according to the target suture line.
Optionally, the calculating unit is specifically configured to calculate, according to the formula
ρ = move_pix / (rect_width × rect_height),
the motion amount ρ of the motion region,
wherein move_pix represents the number of motion pixels in the motion region; rect_width represents the width value of the motion region; rect_height represents the height value of the motion region.
Optionally, the determining unit is specifically configured to select, according to the height value of the overlapping area, all pixel points corresponding to the height value in the overlapping area; and selecting the pixel points which do not belong to the motion area from the pixel points as the starting points.
Optionally, the selecting unit includes a first judging subunit, a calculating subunit, a selecting subunit, a determining subunit, a serving subunit and a second judging subunit,
the first judging subunit is configured to judge whether a target neighborhood point corresponding to the starting point meets a first preset condition; if yes, triggering the calculating subunit; if not, triggering the serving subunit;
the calculating subunit calculates a step value Δstep according to the formula Δstep = rect_width × ρ;
the first judging subunit is further configured to judge whether the target neighborhood point is out of range; if yes, triggering the selected subunit; if not, triggering the determining subunit;
the selecting subunit is used for selecting a neighborhood point meeting a second preset condition from the neighborhood points corresponding to the starting point according to the gray difference matrix to serve as a target point;
the determining subunit is configured to determine a target point according to the step value and the gray level difference matrix;
the serving subunit is configured to take the target neighborhood point as the target point;
the second judging subunit is configured to judge whether the target point meets a third preset condition;
if yes, ending the operation; if not, taking the target point as a new starting point, and returning to the first judging subunit.
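The loop formed by the judging, selecting, determining, and serving subunits can be sketched as a greedy downward walk through the overlap region. Everything here beyond the step formula Δstep = rect_width × ρ is an assumption for illustration: `diff` stands in for the gray difference matrix (rows indexed by height), `motion_mask` marks motion-area pixels, the candidate neighborhood is the three pixels below the current one, and the walk ends at the bottom row:

```python
def trace_seam(diff, start_col, motion_mask, rho, rect_width):
    """Greedily trace one suture line from a starting column to the bottom row.

    diff        -- gray difference matrix of the overlap (list of rows)
    start_col   -- column of the starting point in the top row
    motion_mask -- boolean matrix, True where a pixel lies in a motion area
    rho         -- motion amount of the motion area
    rect_width  -- width of the motion area's bounding rectangle
    """
    h, w = len(diff), len(diff[0])
    step = max(1, int(rect_width * rho))   # step value Δstep = rect_width × ρ
    row, col = 0, start_col
    seam = [(row, col)]
    while row < h - 1:
        if motion_mask[row + 1][col]:      # target neighborhood point is in a motion area:
            row = min(row + step, h - 1)   # jump by the step value to bypass it
        else:
            row += 1                       # otherwise advance one row
        # among the reachable columns, take the one with the smallest gray difference
        candidates = [c for c in (col - 1, col, col + 1) if 0 <= c < w]
        col = min(candidates, key=lambda c: diff[row][c])
        seam.append((row, col))
    return seam
```

With ρ = 0 the walk degenerates to a plain one-row-at-a-time minimum-difference descent; larger ρ makes the jumps over motion areas correspondingly longer.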
Optionally, the system further comprises a judging unit,
the judging unit is used for judging whether all the video frames of the two paths of videos are completely fused;
if not, the calculation unit is triggered.
For the description of the features in the embodiment corresponding to fig. 4, reference may be made to the related description of the embodiments corresponding to fig. 1 and fig. 2, which is not repeated here.
According to the technical scheme, the overlapping area of the video frames in the two videos is calculated, and all motion areas corresponding to the overlapping area are then obtained by a frame difference method. Once a motion area is determined, its motion amount can be calculated from its width value, height value, and number of moving pixels. The starting point of each suture line is determined from the overlapping area and the motion areas; by calculating the gray difference matrix of the overlapping area and using the starting points and the motion amount, target points meeting a preset rule are selected, and a suture line is determined for each starting point from its starting point and target points. From these suture lines, the one meeting the preset condition, i.e., the optimal suture line, is selected as the target suture line, and the video frames are fused according to it. Because the target suture line effectively avoids the motion areas in the video frame, phenomena such as overlapping, dislocation, and ghosting in the stitched image are effectively avoided in the image fused along this suture line, and the accuracy of image stitching is improved.
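Two of the stages named above lend themselves to a compact sketch: the frame-difference motion detection and the gray difference matrix of the overlap region. The threshold value and both function names below are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def frame_difference_mask(prev_gray, cur_gray, thresh=25):
    """Frame difference method: mark pixels whose gray level changed by more
    than `thresh` between consecutive frames as moving."""
    delta = np.abs(cur_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return delta > thresh

def gray_difference_matrix(overlap_a, overlap_b):
    """Per-pixel absolute gray difference between the two frames' overlap
    regions; low values indicate good places for a suture line to pass."""
    return np.abs(overlap_a.astype(np.int16) - overlap_b.astype(np.int16))
```

Casting to a signed type before subtracting avoids the wrap-around that unsigned 8-bit pixel arithmetic would otherwise produce; connected regions of the mask would then yield the motion areas and their bounding rectangles.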
The image fusion method and device provided by the embodiments of the invention have been described in detail above. The embodiments in this specification are described in a progressive manner: each embodiment focuses on its differences from the others, and for identical or similar parts, reference may be made between embodiments. Since the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is brief, and the relevant points can be found in the description of the method. It should be noted that those skilled in the art can make various improvements and modifications to the present invention without departing from its principle, and such improvements and modifications also fall within the scope of the claims of the present invention.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Claims (8)
1. A method of image fusion, comprising:
S10: calculating the overlapping area of the video frames in the two paths of videos;
S11: calculating all motion areas corresponding to the overlapping area by using a frame difference method; calculating the motion amount of the motion area according to the width value and the height value of the motion area and the number of the moving pixels;
S12: determining a starting point of each suture line according to the overlapping area and the motion area;
S13: calculating a gray difference matrix of the overlapping area;
S14: selecting a target point which accords with a preset rule according to the starting point, the motion amount and the gray difference matrix; determining the suture lines corresponding to all the starting points according to the starting points and the target points;
S15: selecting one suture line meeting a preset condition from the suture lines as a target suture line; and fusing the video frames according to the target suture line;
wherein S11 includes:
calculating the motion amount ρ of the motion area according to the formula ρ = move_pix / (rect_width × rect_height),
wherein move_pix represents the number of moving pixels in the motion area; rect_width represents the width value of the motion area; rect_height represents the height value of the motion area.
2. The method according to claim 1, wherein S12 includes:
selecting all pixel points corresponding to the height values in the overlapping area according to the height values of the overlapping area;
and selecting the pixel points which do not belong to the motion area from the pixel points as starting points.
3. The method according to claim 2, wherein S14 includes:
S140: judging whether a target neighborhood point corresponding to the starting point meets a first preset condition; if yes, executing S141; if not, executing S145;
S141: calculating a step value Δstep according to the formula Δstep = rect_width × ρ;
S142: judging whether the target neighborhood point is out of range; if yes, executing S143; if not, executing S144;
S143: selecting, according to the gray difference matrix, a neighborhood point meeting a second preset condition from the neighborhood points corresponding to the starting point as the target point;
S144: determining the target point according to the step value and the gray difference matrix;
S145: taking the target neighborhood point as the target point;
S146: judging whether the target point meets a third preset condition;
if yes, ending the operation; if not, executing S147;
S147: taking the target point as a new starting point, and returning to S140.
4. The method of any one of claims 1-3, further comprising:
judging whether the video frames of the two paths of videos are completely fused or not;
if not, the process returns to S11.
5. An image fusion device is characterized by comprising a calculation unit, a determination unit and a selection unit,
the computing unit is used for computing the overlapping area of the video frames in the two paths of videos;
the computing unit is further configured to compute all motion regions corresponding to the overlap region by using a frame difference method; calculating the amount of motion of the motion area according to the width value and the height value of the motion area and the number of the motion pixels;
the determining unit is used for determining the starting point of each suture line according to the overlapping area and the motion area;
the computing unit is further used for computing a gray difference matrix of the overlapping area;
the selection unit is used for selecting a target point which accords with a preset rule according to the starting point, the motion amount and the gray level difference matrix; determining the suture lines corresponding to all the starting points according to the starting points and the target points;
the selecting unit is also used for selecting one suture line meeting preset conditions from the suture lines as a target suture line; realizing the fusion of the video frames according to the target suture line;
the calculating unit is specifically configured to calculate the motion amount ρ of the motion area according to the formula ρ = move_pix / (rect_width × rect_height),
wherein move_pix represents the number of moving pixels in the motion area; rect_width represents the width value of the motion area; rect_height represents the height value of the motion area.
6. The apparatus according to claim 5, wherein the determining unit is specifically configured to select all pixel points corresponding to the height values in the overlapping area according to the height values of the overlapping area; and selecting the pixel points which do not belong to the motion area from the pixel points as the starting points.
7. The apparatus of claim 6, wherein the selecting unit comprises a first judging subunit, a calculating subunit, a selecting subunit, a determining subunit, a serving subunit, and a second judging subunit,
the first judging subunit is configured to judge whether a target neighborhood point corresponding to the starting point meets a first preset condition; if yes, triggering the calculating subunit; if not, triggering the serving subunit;
the calculating subunit is configured to calculate a step value Δstep according to the formula Δstep = rect_width × ρ;
the first judging subunit is further configured to judge whether the target neighborhood point is out of range; if yes, triggering the selected subunit; if not, triggering the determining subunit;
the selecting subunit is used for selecting a neighborhood point meeting a second preset condition from the neighborhood points corresponding to the starting point according to the gray difference matrix to serve as a target point;
the determining subunit is configured to determine a target point according to the step value and the gray level difference matrix;
the serving subunit is configured to take the target neighborhood point as the target point;
the second judging subunit is configured to judge whether the target point meets a third preset condition;
if yes, ending the operation; if not, taking the target point as a new starting point, and returning to the first judging subunit.
8. The apparatus according to any one of claims 5 to 7, further comprising a judging unit,
the judging unit is used for judging whether all the video frames of the two paths of videos are completely fused;
if not, the calculation unit is triggered.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710538727.9A CN107346536B (en) | 2017-07-04 | 2017-07-04 | Image fusion method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710538727.9A CN107346536B (en) | 2017-07-04 | 2017-07-04 | Image fusion method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107346536A CN107346536A (en) | 2017-11-14 |
CN107346536B true CN107346536B (en) | 2020-08-11 |
Family
ID=60256865
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710538727.9A Active CN107346536B (en) | 2017-07-04 | 2017-07-04 | Image fusion method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107346536B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111583357A (en) * | 2020-05-20 | 2020-08-25 | 重庆工程学院 | Object motion image capturing and synthesizing method based on MATLAB system |
CN112200727B (en) * | 2020-11-06 | 2023-11-21 | 星宸科技股份有限公司 | Image stitching device, image processing chip, and image stitching method |
CN113344787B (en) * | 2021-06-11 | 2022-02-01 | 北京中交华安科技有限公司 | Optimal suture line automatic adjustment algorithm, traffic early warning method and system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101414379A (en) * | 2007-10-17 | 2009-04-22 | 日电(中国)有限公司 | Apparatus and method for generating panorama image |
CN101859433A (en) * | 2009-04-10 | 2010-10-13 | 日电(中国)有限公司 | Image mosaic device and method |
WO2014054958A3 (en) * | 2012-10-05 | 2014-07-24 | Universidade De Coimbra | Method for aligning and tracking point regions in images with radial distortion that outputs motion model parameters, distortion calibration, and variation in zoom |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9262684B2 (en) * | 2013-06-06 | 2016-02-16 | Apple Inc. | Methods of image fusion for image stabilization |
EP3089449B1 (en) * | 2015-04-30 | 2020-03-04 | InterDigital CE Patent Holdings | Method for obtaining light-field data using a non-light-field imaging device, corresponding device, computer program product and non-transitory computer-readable carrier medium |
- 2017-07-04: CN application CN201710538727.9A filed; patent CN107346536B active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101414379A (en) * | 2007-10-17 | 2009-04-22 | 日电(中国)有限公司 | Apparatus and method for generating panorama image |
CN101859433A (en) * | 2009-04-10 | 2010-10-13 | 日电(中国)有限公司 | Image mosaic device and method |
WO2014054958A3 (en) * | 2012-10-05 | 2014-07-24 | Universidade De Coimbra | Method for aligning and tracking point regions in images with radial distortion that outputs motion model parameters, distortion calibration, and variation in zoom |
Non-Patent Citations (1)
Title |
---|
Image stitching combining the optimal suture line and multi-resolution fusion; Gu Yu et al.; Journal of Image and Graphics; 2017-06-16; Vol. 22, No. 6; see Fig. 1, Section 2 *
Also Published As
Publication number | Publication date |
---|---|
CN107346536A (en) | 2017-11-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9277129B2 (en) | Robust image feature based video stabilization and smoothing | |
JP2019053732A (en) | Dynamic generation of image of scene based on removal of unnecessary object existing in the scene | |
CN107087107A (en) | Image processing apparatus and method based on dual camera | |
JP5572299B2 (en) | Automatic focus adjustment method and apparatus for image acquisition device | |
JP5803467B2 (en) | Image processing apparatus, imaging apparatus, and image processing method | |
CN107346536B (en) | Image fusion method and device | |
JP6308748B2 (en) | Image processing apparatus, imaging apparatus, and image processing method | |
CN106447602A (en) | Image mosaic method and device | |
CN110035206B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN103067656A (en) | Imaging device and imaging method | |
US20120057747A1 (en) | Image processing system and image processing method | |
CN110276714B (en) | Method and device for synthesizing rapid scanning panoramic image | |
JP6395429B2 (en) | Image processing apparatus, control method thereof, and storage medium | |
JP2017220885A (en) | Image processing system, control method, and control program | |
JP2007122328A (en) | Distortion aberration correction device and distortion aberration correction method | |
CN107251089B (en) | Image processing method for motion detection and compensation | |
US10805609B2 (en) | Image processing apparatus to generate panoramic image, image pickup apparatus to generate panoramic image, control method of image processing apparatus to generate panoramic image, and non-transitory computer readable storage medium to generate panoramic image | |
JP6961423B2 (en) | Image processing equipment, imaging equipment, control methods for image processing equipment, programs and recording media | |
JP5473836B2 (en) | Parking detection device, parking detection method, and parking detection program | |
JPH1062154A (en) | Processing method of measured value, method and device for shape reconstruction | |
JP2009044361A (en) | Image processor, and image processing method | |
JP2020150448A (en) | Image pickup apparatus, control method therefor, program, and storage medium | |
JP7346021B2 (en) | Image processing device, image processing method, imaging device, program and recording medium | |
CN117152400B (en) | Method and system for fusing multiple paths of continuous videos and three-dimensional twin scenes on traffic road | |
JP6833772B2 (en) | Image processing equipment, imaging equipment, image processing methods and programs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||