WO2017088533A1 - Method and Apparatus for Merging Images - Google Patents

Method and Apparatus for Merging Images

Info

Publication number
WO2017088533A1
WO2017088533A1 (PCT/CN2016/095880)
Authority
WO
WIPO (PCT)
Prior art keywords
images
image
region
optical flow
flow vector
Prior art date
Application number
PCT/CN2016/095880
Other languages
English (en)
French (fr)
Inventor
罗骜
程洪
田勇
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2017088533A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2622: Signal amplitude transition in the zone between image portions, e.g. soft edges
    • H04N5/265: Mixing

Definitions

  • the present invention relates to the field of image processing, and in particular, to a method and apparatus for merging images.
  • Panoramic photography in daily life, the merging of satellite images and aerial images, and similar applications all require seamlessly stitching images taken from different viewpoints into a single wide-field, high-resolution panoramic mosaic image.
  • A common camera array is generally used to acquire a set of low-resolution or small-field-of-view images with overlapping regions; the images from multiple viewpoints are then combined, stitched, and fused into a new high-resolution, wide-angle image. The merged image contains all the information of the images before stitching and looks as if it had been taken from a single viewpoint.
  • Embodiments of the present invention provide a method and apparatus for merging images to reduce blur or ghosting of a merged image due to parallax.
  • In a first aspect, an embodiment of the present invention provides a method for merging images. The method includes: acquiring two images to be merged, the two images being images respectively collected from two viewpoints and having an overlapping region; dividing the overlapping region into at least two candidate fusion regions, each of the at least two candidate fusion regions containing a line that divides each of the two images into two disconnected parts; determining, based on the optical flow vector between the two images, an optical flow vector mapping error corresponding to each candidate fusion region, where the optical flow vector mapping error corresponding to each candidate fusion region is used to indicate the error between the respective sub-images of the two images in that candidate fusion region and the images at the same viewpoint to which they are mapped based on the optical flow vector; selecting, according to the optical flow vector mapping error corresponding to each of the at least two candidate fusion regions, a target fusion region of the two images from the at least two candidate fusion regions; and fusing the two images in the target fusion region to obtain a merged image.
  • The overlapping area of the images to be merged is divided into multiple candidate fusion regions, and by determining the optical flow vector mapping error corresponding to each of the candidate fusion regions, the optical flow vector mapping (or optical flow field mapping) error of the sub-images in the overlapping region can be estimated. The target fusion region can then be selected from the candidate fusion regions according to the error estimate, and the two images are fused in the target fusion region to obtain a merged image of the two images. This not only reduces blur or ghosting of the merged image due to parallax, but also saves computation in image merging.
  • the method further includes: combining the two images in the target fusion region to obtain a merged image of the two images, including: And acquiring, according to the optical flow vector, a mapping image of the two overlapping sub-images respectively corresponding to the intermediate viewpoints of the two viewpoints, wherein the coordinate value of the intermediate viewpoint is an average of coordinate values of the two viewpoints, and the two overlapping sub-images are An overlapping sub-image of the two images in the overlapping region; the two mapping images are fused in the target fusion region to obtain a fused sub-image; and the two overlapping sub-images are excluded from the two images The outer sub-image and the fused sub-image are spliced to obtain the merged image.
  • Optionally, the two images include a first image and a second image, the first image corresponding to a first viewpoint and the second image corresponding to a second viewpoint, and determining, based on the optical flow vector between the two images, the optical flow vector mapping error corresponding to each of the at least two candidate fusion regions includes: acquiring, based on the optical flow vector, an image of the overlapping sub-image of the first image in the overlapping region at the second viewpoint; determining a first occlusion image between the image corresponding to the second viewpoint and the overlapping sub-image of the second image in the overlapping region; acquiring, based on the optical flow vector, a second occlusion image of the first occlusion image at the intermediate viewpoint of the two viewpoints; and determining the optical flow vector mapping error corresponding to each candidate fusion region according to the information of the region used to indicate occlusion within that candidate fusion region in the second occlusion image.
  • the target fusion region can be selected according to the optical flow vector mapping error.
  • Optionally, the region used to indicate occlusion includes pixel points used to indicate occlusion, and determining the optical flow vector mapping error corresponding to each candidate fusion region according to the distribution of occlusion-indicating regions within the area of the second occlusion image corresponding to that candidate fusion region includes: determining the sum of the pixel values of the occlusion-indicating pixel points in the area of the second occlusion image corresponding to each candidate fusion region, and using this sum of pixel values as the optical flow vector mapping error corresponding to that candidate fusion region.
  • the method further includes: And selecting, according to the optical flow vector mapping error corresponding to the at least two candidate fusion regions, the fusion region of the two images from the at least two candidate fusion regions, including: corresponding optical flow vectors in the at least two candidate fusion regions The candidate fusion regions with the smallest mapping error are determined as the target fusion regions of the two images.
  • By taking the candidate fusion region with the smallest optical flow vector mapping error as the target fusion region of the two images and finally fusing the two images to be merged in the target fusion region, the fusion quality of the merged image is improved and blur and ghosting caused by parallax are reduced.
  • the method further includes: And selecting, according to the optical flow vector mapping error corresponding to the at least two candidate fused regions, the fused region of the two images from the at least two candidate fused regions, including: corresponding optical flows in the at least two candidate fused regions The candidate fusion region whose vector mapping error is smaller than the preset threshold is determined as the target fusion region of the two images.
  • In this way, the fusion quality of the merged image is improved, and blurring and ghosting due to parallax are reduced.
  • the method further includes: The sum of the areas of the at least two candidate fusion regions is smaller than the area of the overlapping region.
  • the calculation of the optical flow vector mapping error and the calculation amount when the image is fused can be reduced, and the efficiency and processing speed of the composite image can be improved.
  • An embodiment of the present invention further provides an apparatus for merging images. The apparatus includes: an acquiring module, configured to acquire two images to be merged, the two images being images respectively collected from two viewpoints and having an overlapping region; a determining module, configured to divide the overlapping region into at least two candidate fusion regions, each of the at least two candidate fusion regions containing a line that divides each of the two images into two disconnected parts, and to determine, based on the optical flow vector between the two images, an optical flow vector mapping error corresponding to each candidate fusion region, where the optical flow vector mapping error corresponding to each candidate fusion region is used to indicate the error between the respective sub-images of the two images in that candidate fusion region and the images at the same viewpoint to which they are mapped based on the optical flow vector; a selection module, configured to select, according to the optical flow vector mapping error corresponding to each of the at least two candidate fusion regions, a target fusion region of the two images from the at least two candidate fusion regions; and a fusion module, configured to fuse the two images in the target fusion region to obtain a merged image of the two images.
  • The overlapping area of the images to be merged is divided into multiple candidate fusion regions, and by determining the optical flow vector mapping error corresponding to each of the candidate fusion regions, the optical flow vector mapping (or optical flow field mapping) error of the sub-images in the overlapping region can be estimated. The target fusion region can then be selected from the candidate fusion regions according to the error estimate, and the two images are fused in the target fusion region to obtain a merged image of the two images. This not only reduces blur or ghosting of the merged image due to parallax, but also saves computation in image merging.
  • Optionally, the merging module is configured to: obtain, based on the optical flow vector, mapping images of the two overlapping sub-images at the intermediate viewpoint of the two viewpoints, where the coordinate value of the intermediate viewpoint is the average of the coordinate values of the two viewpoints and the two overlapping sub-images are the overlapping sub-images of the two images in the overlapping region; fuse the two mapping images in the target fusion region to obtain a fused sub-image; and splice the sub-images other than the two overlapping sub-images together with the fused sub-image to obtain the merged image.
  • Optionally, the two images include a first image and a second image, where the first image corresponds to a first viewpoint and the second image corresponds to a second viewpoint, and the determining module is configured to: acquire, based on the optical flow vector, an image of the overlapping sub-image of the first image in the overlapping region at the second viewpoint; determine a first occlusion image between the image corresponding to the second viewpoint and the overlapping sub-image of the second image in the overlapping region; acquire, based on the optical flow vector, a second occlusion image of the first occlusion image at the intermediate viewpoint of the two viewpoints; and determine the optical flow vector mapping error corresponding to each candidate fusion region according to the information of the region used to indicate occlusion within that candidate fusion region in the second occlusion image.
  • the area for indicating occlusion includes a pixel point for indicating occlusion
  • the determining module is specifically configured to: Determining, by each candidate fusion region, a sum of pixel values of pixels for indicating occlusion in a region in the second occlusion image; using the sum of the pixel values as an optical flow vector corresponding to each candidate fusion region Mapping error.
  • the selecting module is specifically used to And determining, as the target fusion region of the two images, the candidate fusion region that minimizes the corresponding optical flow vector mapping error in the at least two candidate fusion regions.
  • the selecting module is specifically used to And determining, as the target fusion region of the two images, the candidate fusion regions of the at least two candidate fusion regions that have corresponding optical flow vector mapping errors smaller than a preset threshold.
  • an apparatus for merging images comprising a processor and a memory; the memory for storing code; the processor reading the code stored in the memory for performing the first aspect method.
  • FIG. 1 is a schematic flow chart of a method of merging images according to an embodiment of the present invention.
  • FIG. 2 is a comparison diagram of a fusion effect with the prior art according to still another embodiment of the present invention.
  • FIG. 3 is a diagram showing two images to be merged in accordance with another embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a process for obtaining a first occlusion image according to another embodiment of the present invention.
  • FIG. 5 is an effect diagram of a final merged image of a method of merging images according to another embodiment of the present invention.
  • FIG. 6 is a schematic diagram of an occlusion image generated by a foreground object in accordance with yet another embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a method for dividing a candidate fusion region of an overlap region according to still another embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a method for dividing a candidate fusion region of an overlap region according to still another embodiment of the present invention.
  • Figure 9 is a comparison diagram of the fusion effect with the prior art according to still another embodiment of the present invention.
  • FIG. 10 is a schematic block diagram of an apparatus for merging images according to an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of an apparatus for merging images in accordance with an embodiment of the present invention.
  • the method in the embodiment of the present invention takes the merging process of two images as an example. It is obvious that the method in the embodiment of the present invention can also be applied to an image fusion system in which multiple images are combined, and can also be applied to Image fusion system in the video field.
  • the images to be merged may be acquired by the camera array, or may be acquired by other image acquisition devices.
  • the two images to be merged in the embodiment of the present invention may be two adjacent cameras in the camera array. Two images captured by the camera in the same scene.
  • The images to be merged can be acquired from different viewpoints, i.e., different angles. Because the images are acquired from different viewpoints, blurring or ghosting may occur in the overlapping area when the images are combined. Parallax is the difference in direction that arises when the same target is observed from two different viewpoints, and it should be minimized during the process of merging images.
  • objects near the seam may be duplicated or missing, which is also called ghosting.
  • blurring or ghosting may occur during the image fusion process. Blurring or ghosting caused by parallax should be reduced or avoided in the process of merging images.
  • FIG. 1 shows a schematic flow chart of a method 100 of merging images according to an embodiment of the present invention. As shown in FIG. 1, the method 100 includes:
  • The overlapping area is divided into at least two candidate fusion areas, each of which contains a line dividing each of the two images into two disconnected parts; and, based on the optical flow vector between the two images, an optical flow vector mapping error corresponding to each candidate fusion region is determined, where the optical flow vector mapping error corresponding to each candidate fusion region is used to indicate the error between the respective sub-images of the two images in that candidate fusion region and the images at the same viewpoint to which they are mapped based on the optical flow vector;
  • S130: Select, according to the optical flow vector mapping error corresponding to each of the at least two candidate fusion regions, a target fusion region of the two images from the at least two candidate fusion regions;
  • the two images are merged in the target fusion region to obtain a combined image of the two images.
  • By dividing the overlapping area of the images to be merged into multiple candidate fusion regions and determining the optical flow vector mapping error corresponding to each candidate fusion region, the error of the optical flow vector mapping (or optical flow field mapping) of the sub-images in the overlapping region can be estimated. The target fusion region can then be selected from the candidate fusion regions according to the error estimate, and the two images are fused in the target fusion region to obtain a merged image of the two images. This not only reduces blur or ghosting of the merged image due to parallax, but also saves computation in image merging.
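  • As an illustration of this overall flow, the following minimal sketch strings the steps together. It is non-normative: compute_optical_flow, mapping_error, fuse_in_region, and stitch are assumed placeholder helpers that are not defined by this document, and the candidate fusion regions are assumed, purely for illustration, to be vertical strips of the overlapping region.

```python
import numpy as np

def merge_images(img_a, img_b, overlap_a, overlap_b, num_candidates=5):
    """img_a/img_b: the two images; overlap_a/overlap_b: their overlapping sub-images."""
    # Optical flow between the two overlapping sub-images (assumed helper).
    flow = compute_optical_flow(overlap_a, overlap_b)

    # Divide the overlap into vertical candidate fusion strips; each strip
    # splits every image into two disconnected parts, as required above.
    h, w = overlap_a.shape[:2]
    strips = np.array_split(np.arange(w), num_candidates)

    # One optical flow vector mapping error per candidate fusion region.
    errors = [mapping_error(overlap_a, overlap_b, flow, cols) for cols in strips]

    # The candidate region with the smallest error becomes the target region.
    target = strips[int(np.argmin(errors))]

    # Fuse inside the target region, then stitch with the non-overlapping parts.
    fused_overlap = fuse_in_region(overlap_a, overlap_b, flow, target)
    return stitch(img_a, img_b, fused_overlap)
```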
  • Optionally, the optical flow vector mapping error corresponding to each candidate fusion region may be the error obtained by mapping, based on the optical flow vector, the sub-image of one of the two images in that candidate fusion region to the viewpoint corresponding to the other image. For example, the two images may include a first image and a second image; the sub-image of the first image in each candidate fusion region may be mapped, based on the optical flow vector, to the viewpoint corresponding to the second image, and the error between the mapped sub-image and the corresponding sub-image of the second image is used as the optical flow vector mapping error of that candidate fusion region. Similarly, the sub-image of the second image in each candidate fusion region may be mapped to the viewpoint corresponding to the first image, and the error between the mapped sub-image and the corresponding sub-image of the first image is used as the optical flow vector mapping error of that candidate fusion region.
  • When selecting the target fusion region from the at least two candidate fusion regions, either one of the two optical flow vector mapping errors of each candidate fusion region may be considered, or both may be considered together; this is not specifically limited in the embodiment of the present invention.
  • the overlapping area is divided into at least two candidate fusion areas, and the number of candidate fusion areas may be 2-7.
  • the two images may respectively correspond to one view point, and the same view point may refer to a view point of each of the two images to be merged, or may refer to a view point (or intermediate view point) in the middle of the corresponding view points of the two images to be merged.
  • the optical flow vector between the two images may include an optical flow vector that transforms each of the two images to a viewpoint corresponding to the other image.
  • the two images to be merged may include a first image and a second image
  • the optical flow vector between the first image and the second image may include an optical flow vector used by the first image to be converted to a viewpoint corresponding to the second image.
  • the optical flow vector used by the second image to be converted to the corresponding viewpoint of the first image may also be included.
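  • The text above does not prescribe a particular algorithm for computing these optical flow vectors. Purely as an illustration, a dense flow field in each direction could be estimated with an off-the-shelf method such as OpenCV's Farneback flow; the parameter values below are arbitrary defaults, not values taken from this document.

```python
import cv2

def dense_flows(gray_a, gray_b):
    """gray_a/gray_b: single-channel 8-bit versions of the two images."""
    params = dict(pyr_scale=0.5, levels=3, winsize=15, iterations=3,
                  poly_n=5, poly_sigma=1.2, flags=0)
    flow_ab = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None, **params)  # first -> second viewpoint
    flow_ba = cv2.calcOpticalFlowFarneback(gray_b, gray_a, None, **params)  # second -> first viewpoint
    return flow_ab, flow_ba
```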
  • The optical flow vector mapping error of a candidate fusion region indicates the accuracy of the optical flow vector transformation of the sub-images of the images to be merged in that region: the higher the accuracy of the transformation result, the smaller the optical flow vector mapping error of the candidate fusion region. The smaller the optical flow vector mapping error of a candidate fusion region, the less blur or ghosting is produced by fusing the two images in that region, and the higher the quality of the final merged image.
  • The target fusion region is the region in which the two images to be merged are fused.
  • the target fusion region can be selected to reduce the size of the fusion region while ensuring the fusion quality, thereby reducing the computational amount of the fused image and increasing the speed of the fused image.
  • the candidate fusion region with the smallest optical flow vector mapping error among the at least two candidate fusion regions may be selected as the target fusion region of the two images to be merged.
  • For example, the candidate fusion region whose optical flow vector mapping error for the optical flow vector transformation of the sub-image of the first image is the smallest among the at least two candidate fusion regions may be selected as the target fusion region, or the candidate fusion region whose optical flow vector mapping error for the optical flow vector transformation of the sub-image of the second image is the smallest may be selected as the target fusion region.
  • Generally, the two optical flow vector mapping errors of the same candidate fusion region are of similar magnitude, while those of different candidate fusion regions may differ greatly. Therefore, when selecting the candidate fusion region with the smallest optical flow vector mapping error, either one of the two optical flow vector mapping errors of each candidate fusion region or both of them may be considered.
  • In this way, the candidate fusion region with the smallest optical flow vector mapping error is selected as the target fusion region of the two images, and the two images to be merged are finally fused in the target fusion region, thereby improving the fusion quality of the merged image and reducing blur and ghosting due to parallax.
  • Optionally, candidate fusion regions whose optical flow vector mapping error is less than a preset threshold may be selected from the at least two candidate fusion regions as target candidate fusion regions, and the target fusion region, that is, the region in which the two images are fused, is then determined among them. For example, when the two images to be merged include a first image and a second image, the candidate fusion regions whose optical flow vector mapping error for the optical flow vector transformation of the sub-image of the first image is less than the preset threshold may be selected as target candidate fusion regions; the candidate fusion regions whose optical flow vector mapping error for the optical flow vector transformation of the sub-image of the second image is less than the preset threshold may likewise be selected; or the candidate fusion regions whose two optical flow vector mapping errors, corresponding to the sub-images of both images, are both less than the preset threshold may be selected, that is, both optical flow vector mapping errors of a selected target candidate fusion region are less than the preset threshold. The target fusion region is then determined from among the eligible candidate fusion regions, and an eligible candidate fusion region serves as the target fusion region.
  • the preset threshold may be set according to the quality requirement of the final merged image, or may be set according to experience, and the embodiment of the present invention is not limited thereto.
  • In this way, a candidate fusion region whose optical flow vector mapping error is less than the preset threshold is selected as the target fusion region of the two images, and the two images to be merged are finally fused in the target fusion region, thereby improving the fusion quality of the merged image and reducing blur and ghosting caused by parallax.
  • In specific implementation, the optical flow vector mapping error of the intermediate candidate fusion region may be calculated first. Empirically, the optical flow vector mapping errors of the candidate fusion regions on the two sides can generally be considered smaller than that of the intermediate candidate fusion region.
  • Let M be the sum of the gray values of the occlusion pixels of the first image in the intermediate candidate fusion region, and let N be the sum of the gray values of the occlusion pixels of the second image in the intermediate candidate fusion region. M and N serve as the optical flow vector mapping errors of the intermediate candidate fusion region and are compared with a preset threshold. When both M and N are smaller than the preset threshold, the intermediate candidate fusion region may be confirmed as the target fusion region.
  • When the optical flow vector mapping error of the intermediate candidate fusion region satisfies the quality requirement for the merged image, it is not necessary to calculate the optical flow vector mapping errors of the other candidate fusion regions, which reduces the workload of computing mapping errors. If either M or N is greater than the preset threshold, for example if N > M > threshold, one of the candidate fusion regions on the two sides is selected as the target fusion region.
  • In this case, the optical flow estimation of the sub-image of the second image in the intermediate candidate fusion region can be considered inaccurate, so the error of mapping that sub-image to the viewpoint of the first image by the optical flow vector is large, and the proportion of the second image in the fusion result should be minimized. Therefore, the candidate fusion region close to the second image may be selected as the target fusion region.
  • In the target fusion region, the proportions of the first image and the second image are each 50%, while in the part of the overlapping region between the first image and the target fusion region, the corresponding sub-image of the first image is transformed by the optical flow vector to the intermediate viewpoint, so that the proportion of the first image there is 100%.
  • In this way, blurring or ghosting of the final merged image due to parallax is reduced, and the processing speed of image merging is improved.
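  • The shortcut described above can be sketched as follows. This is a hedged illustration only: occlusion_sum is an assumed helper that returns the sum of the occlusion pixel values of one image inside a region (the M and N of the preceding paragraphs), the ordering of the candidate regions from the first-image side to the second-image side is an assumption of the sketch, and the symmetric branch for M >= N is likewise an assumption.

```python
def pick_target_region(occ_a, occ_b, regions, threshold):
    """regions: candidate fusion regions ordered from the first-image side to the
    second-image side; occ_a/occ_b: occlusion maps at the intermediate viewpoint
    for the first and second image."""
    middle = regions[len(regions) // 2]
    M = occlusion_sum(occ_a, middle)   # error contributed by the first image
    N = occlusion_sum(occ_b, middle)   # error contributed by the second image
    if M < threshold and N < threshold:
        return middle                  # middle region already good enough
    if N > M:
        # The second image's flow is less reliable: keep as little of the second
        # image as possible, i.e. move the seam toward the second image.
        return regions[-1]
    return regions[0]                  # symmetric case: seam toward the first image
```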
  • the two images are merged in the target fusion region, thereby obtaining a merged image of the two images.
  • Optionally, merging the two images in the target fusion region to obtain a merged image of the two images includes: obtaining, based on the optical flow vector, mapping images of the two overlapping sub-images at the intermediate viewpoint of the two viewpoints, where the coordinate value of the intermediate viewpoint is the average of the coordinate values of the two viewpoints and the two overlapping sub-images are the overlapping sub-images of the two images in the overlapping region; fusing the two mapping images in the target fusion region to obtain a fused sub-image; and splicing the sub-images of the two images other than the two overlapping sub-images together with the fused sub-image to obtain the merged image.
  • The mapping images of the two overlapping sub-images at the intermediate viewpoint are obtained based on the optical flow vector, and these mapping images are fused in the target fusion region, so that a fused sub-image based on the optical flow vector transformation is obtained; the merged image finally obtained in this way effectively reduces blur or ghosting due to parallax.
  • In specific implementation, the two images to be merged may include a first image and a second image. According to the optical flow vector between the first image and the second image, the sub-image of the first image corresponding to the fusion region is transformed to the intermediate viewpoint, the sub-image of the second image corresponding to the fusion region is transformed to the intermediate viewpoint, and the two sub-images transformed to the intermediate viewpoint are then fused.
  • The fusion of the two sub-images may use a weighted average method, also called a feathering method. A weighted average method combined with a median filter, i.e., a feathering method with a median filter, may also be used.
  • The final merged image may then be obtained from the two images to be merged and the fused sub-image.
  • For example, the images to be merged include a first image and a second image, with the fusion region as a boundary. For the first region in the overlapping region, which lies between the fusion region and the first image, the sub-image of the first image corresponding to the first region is transformed by the optical flow vector and mapped to the intermediate viewpoint to obtain a first mapping sub-image; for the second region in the overlapping region, which lies between the fusion region and the second image, the sub-image of the second image corresponding to the second region is transformed by the optical flow vector and mapped to the intermediate viewpoint to obtain a second mapping sub-image. The sub-image of the first image outside the overlapping region, the sub-image of the second image outside the overlapping region, the fused sub-image, the first mapping sub-image, and the second mapping sub-image are then spliced together to obtain the final merged image.
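  • A minimal sketch of this intermediate-viewpoint mapping and assembly is given below. It assumes that mapping a sub-image to the intermediate viewpoint can be approximated by remapping its pixels half-way along the optical flow, and that the overlap is laid out as vertical strips with the first image on the left; both are illustrative assumptions, not statements of the exact formulas used here.

```python
import numpy as np
import cv2  # used only for the remap; any resampling routine would do

def warp_to_intermediate(img, flow):
    """Remap 'img' roughly half-way along 'flow' (shape (h, w, 2)) toward the
    other viewpoint. The half-flow backward mapping is an assumption consistent
    with the intermediate viewpoint being the average of the two viewpoints."""
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y + 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

def assemble_overlap(map_first, map_second, fused_strip, cols):
    """Compose the overlap from the first image's mapping sub-image, the fused
    strip in the target fusion region ('cols' = its column indices), and the
    second image's mapping sub-image on the far side."""
    out = map_first.copy()
    out[:, cols] = fused_strip
    out[:, cols[-1] + 1:] = map_second[:, cols[-1] + 1:]
    return out
```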
  • In specific implementation, the images to be merged may include a first image and a second image. Let the coordinates of a pixel point of the first mapping sub-image of the first image be (i, j), this coordinate being δ1 pixels from the edge of the first image and δ2 pixels from the edge of the second image, and let the optical flow vector of the corresponding sub-image of the first image in the overlapping region be F = (F_i, F_j). The coordinates of the pixel points of the third mapping sub-image, obtained by mapping the first image to the intermediate viewpoint, are determined by back projection according to formula (1); the pixel points of the fourth mapping sub-image, obtained by mapping the second image to the intermediate viewpoint, are obtained in the corresponding way and, for brevity, are not described again here. In formula (1), (B_i, B_j) represents the coordinates of a pixel point of the third mapping sub-image, which are also the coordinates of the corresponding pixel point of the fused sub-image. Let c_1 be the pixel value of the first image matching the fused sub-image and c_2 the pixel value of the second image matching the fused sub-image; the pixel value of the fused sub-image can then be calculated by the weighted average method according to formula (2).
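  • Since formula (2) is a weighted average, a simple feathering blend of the two mapped sub-images inside the target fusion region can be sketched as follows. The linear ramp of the weights across the strip is an illustrative assumption; the exact weights of formula (2) are not reproduced in this text.

```python
import numpy as np

def feather_blend(c1_strip, c2_strip):
    """c1_strip / c2_strip: the mapped sub-images of the first and second image
    restricted to the target fusion region, shape (h, w, channels)."""
    w = c1_strip.shape[1]
    alpha = np.linspace(1.0, 0.0, w).reshape(1, w, 1)   # weight of the first image
    return alpha * c1_strip.astype(np.float32) + (1.0 - alpha) * c2_strip.astype(np.float32)
```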
  • FIG. 2 shows a comparison of the fusion effect between the embodiment of the present invention and the prior art.
  • (a) of FIG. 2 shows the sub-image corresponding to the overlapping region obtained by the weighted average method, and (b) of FIG. 2 shows the sub-image corresponding to the overlapping region obtained by the method for merging images of the embodiment of the present invention.
  • the method of merging images of the embodiment of the present invention can effectively reduce blurring or ghosting due to parallax as compared with the prior art.
  • Optionally, fusing the two mapping images in the target fusion region to obtain the fused sub-image may also be implemented with a feathering method improved by median filtering. For the fusion between the third mapping sub-image and the fourth mapping sub-image, the median filtering method mainly uses a median filter to process the pixels of the overlapping region: the median filter is applied to the region near the boundary, and each pixel there is median-filtered so that its value stays close to the values of the surrounding pixels, thereby eliminating discontinuities in light intensity.
  • the median filtering method can highlight the moving target and maintain the original background in the scene where the moving target exists. Therefore, the improved feathering method of the median filtering method in the embodiment of the present invention can eliminate blur or ghosting and achieve better image combining effect.
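  • A hedged sketch of this median-filter-improved feathering is shown below: the strip is first feather-blended (feather_blend from the sketch above) and a median filter is then applied to a band of pixels near the middle of the strip, so that each pixel value stays close to its neighbours. The band width, filter size, and the use of a three-channel image are illustrative choices, not values stated in this document.

```python
import numpy as np
from scipy.ndimage import median_filter

def feather_blend_with_median(c1_strip, c2_strip, band=8, size=3):
    blended = feather_blend(c1_strip, c2_strip)          # sketch defined earlier
    w = blended.shape[1]
    lo, hi = max(0, w // 2 - band), min(w, w // 2 + band)
    # Median-filter only the band near the boundary to suppress intensity jumps.
    blended[:, lo:hi] = median_filter(blended[:, lo:hi], size=(size, size, 1))
    return blended
```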
  • Optionally, in the method 100 for merging images, the two images include a first image and a second image, the first image corresponding to the first viewpoint and the second image corresponding to the second viewpoint, and determining, based on the optical flow vector between the two images, the optical flow vector mapping error corresponding to each of the at least two candidate fusion regions includes: acquiring, based on the optical flow vector, an image of the overlapping sub-image of the first image in the overlapping region at the second viewpoint; determining a first occlusion image between the image corresponding to the second viewpoint and the overlapping sub-image of the second image in the overlapping region; acquiring, based on the optical flow vector, a second occlusion image of the first occlusion image at the intermediate viewpoint of the two viewpoints; and determining the optical flow vector mapping error corresponding to each candidate fusion region according to the information of the region used to indicate occlusion within that candidate fusion region in the second occlusion image.
  • In this way, the image of the overlapping sub-image of the first image at the second viewpoint is first acquired based on the optical flow vector, and the first occlusion image between this image and the overlapping sub-image of the second image in the overlapping region is obtained; the second occlusion image corresponding to the intermediate viewpoint is then acquired from the first occlusion image, and the optical flow vector mapping error corresponding to each candidate fusion region is determined according to the information of the second occlusion image used to indicate occlusion, so that the target fusion region can be selected according to the optical flow vector mapping error.
  • The distribution of the occlusion-indicating regions of the second occlusion image within a candidate fusion region may be represented by the pixel values of the second occlusion image, and specifically by the distribution of occlusion pixels relative to non-occlusion pixels in the second occlusion image. For example, the more occlusion pixels there are, the larger the occlusion regions of the second occlusion image, indicating a larger optical flow vector mapping error for the corresponding candidate fusion region.
  • The optical flow vector mapping error determined from the second occlusion image can more accurately indicate the accuracy of the optical flow vector transformation and, therefore, the quality of the merged image obtained when the images to be merged are fused in that candidate fusion region.
  • the method for determining the first occlusion image is not limited in the embodiment of the present invention.
  • the first occlusion image may be determined by using an optimization algorithm, and the optimization algorithm used in the embodiment of the present invention is not specifically limited.
  • For example, the first occlusion image may be determined by finding the optimal result of an energy function with a graph-cut method, although other methods for determining the occlusion map may also be employed. The following example determines the first occlusion image by finding the optimal result of the energy function with the graph-cut method.
  • Figure 3 shows two images to be merged.
  • the image to be merged may be a first image corresponding to the left side and a second image corresponding to the right side.
  • I_L(x, y) may be defined to represent the sub-image corresponding to the overlapping region in the first image, and I_R(x, y) the sub-image corresponding to the overlapping region in the second image. The first mapping sub-image obtained by transforming I_L(x, y) through the optical flow vector toward I_R(x, y) can be expressed as I_Lf(x, y).
  • The optimal result of the energy function E[f(x)] can be solved by the graph-cut method to obtain the first occlusion image between I_Lf(x, y) and I_R(x, y).
  • In the energy function given by formula (3), formula (4), and formula (5), E_data[f(x)] represents the data term, obtained by establishing a relationship between the pixel values at corresponding coordinates of I_Lf(x, y) and I_R(x, y); λ_occlusion represents the occlusion data-term penalty value; λ_occlusiondistance represents the occlusion distance penalty value; and E_smooth[f(x)] represents the smoothing term used to ensure the accuracy of the occlusion.
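  • A full graph-cut minimisation of E[f(x)] is beyond the scope of a short example. The sketch below is a deliberately simplified substitute that only mimics the role of the data term: it marks a pixel as occluded when the flow-warped first image and the second image disagree strongly. The threshold value is arbitrary, and this is not the optimisation actually described above.

```python
import numpy as np

def simple_occlusion_map(I_Lf, I_R, tau=30.0):
    """Return a 0/1 occlusion map: 1 where I_Lf(x, y) and I_R(x, y) differ by
    more than 'tau' (a crude stand-in for the graph-cut data term)."""
    diff = np.abs(I_Lf.astype(np.float32) - I_R.astype(np.float32))
    if diff.ndim == 3:                       # colour images: average the channels
        diff = diff.mean(axis=2)
    return (diff > tau).astype(np.uint8)
```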
  • FIG. 4 is a schematic diagram showing the process of obtaining the first occlusion image. As shown in FIG. 4, (a) of FIG. 4 shows the sub-image I_L(x, y) corresponding to the overlapping region in the first image, (b) of FIG. 4 shows the first mapping sub-image I_Lf(x, y) of the first image, and (c) of FIG. 4 shows the first occlusion image between I_Lf(x, y) and I_R(x, y), in which the white pixel points represent occlusion pixels.
  • After the first occlusion image of the first image is mapped to the intermediate viewpoint according to the optical flow vector W(u, v) to obtain the second occlusion image, the optical flow vector mapping error of the first image in each candidate fusion region is determined according to the occlusion-region distribution in the second occlusion image. FIG. 5 shows the effect of the final merged image obtained by the method for merging images in this example; as can be seen from FIG. 5, the method of the embodiment of the present invention can effectively reduce blurring or ghosting due to parallax.
  • Optionally, the region used to indicate occlusion includes pixel points used to indicate occlusion, and determining the optical flow vector mapping error corresponding to each candidate fusion region according to the distribution of occlusion-indicating regions within that candidate fusion region in the second occlusion image includes: determining the sum of the pixel values of the occlusion-indicating pixel points within the region of the second occlusion image corresponding to each candidate fusion region, and using this sum of pixel values as the optical flow vector mapping error corresponding to that candidate fusion region.
  • FIG. 6 shows a schematic diagram of an occlusion image generated by a foreground object, which may be a moving object. The white portion represents the background of the image, the dotted portion represents the foreground object, and the hatched portion represents the occlusion image generated by comparing (a) of FIG. 6 with (b) of FIG. 6. As shown in the figure, the occlusion map may indicate the magnitude of the difference between the two images due to the presence of the moving object, and may also indicate the magnitude of the difference in the foreground object due to parallax.
  • the occlusion pixel value of the occlusion map can be set to 1 and the non-occlusion pixel value is 0.
  • the optical flow vector mapping error can then be determined by calculating the sum of all pixel values within the candidate fusion region.
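  • With that 0/1 convention, the error measure reduces to a sum over the candidate fusion region, as in the following sketch (a vertical-strip layout of the candidate regions is assumed purely for illustration).

```python
import numpy as np

def region_mapping_error(occlusion_map, cols):
    """occlusion_map: second occlusion image with occlusion pixels set to 1 and
    non-occlusion pixels set to 0; cols: column indices of one candidate
    fusion region. The sum of the pixel values is the mapping error."""
    return int(occlusion_map[:, cols].sum())
```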
  • a sum of areas of the at least two candidate fusion regions is smaller than an area of the overlapping region.
  • FIGS. 7 and 8 show schematic diagrams of two methods of dividing the overlapping region into candidate fusion regions, in which the hatched portions represent candidate fusion regions. The candidate fusion regions may be arranged seamlessly, that is, the sum of the areas they occupy equals the area of the overlapping region. The candidate fusion regions may also be spaced apart from one another, that is, there are non-candidate regions between them, which can equally be understood as the sum of the areas of the candidate fusion regions being smaller than the area of the overlapping region.
  • FIG. 9 shows a comparison of the fusion effect between the embodiment of the present invention and the prior art.
  • (a) of FIG. 9 shows the sub-image corresponding to the overlapping region obtained by the weighting method in the prior art, and (b) of FIG. 9 shows the sub-image obtained with the spaced candidate fusion regions of the embodiment of the present invention. The method for merging images with such candidate fusion regions according to the embodiment of the present invention can effectively reduce the blur or ghosting caused by parallax.
  • When the candidate fusion regions are arranged with intervals between them, the computation of the optical flow vector mapping errors and the computation during image fusion can be reduced, and the efficiency and processing speed of image merging can be improved.
  • A specific embodiment of the method for merging images according to an embodiment of the present invention has been described in detail above with reference to FIGS. 1 through 9.
  • an apparatus for merging images according to an embodiment of the present invention will be described in detail with reference to FIGS. 10 and 11.
  • FIG. 10 shows a schematic diagram of an apparatus 1000 for merging images according to an embodiment of the present invention. It should be understood that the following and other operations and/or functions of the modules in the apparatus 1000 are intended to implement the corresponding processes of the methods in FIGS. 1 through 9; for brevity, they are not described again here.
  • the device 1000 includes:
  • the obtaining module 1010 is configured to acquire two images to be merged, the two images being images respectively collected from two viewpoints, the two images having overlapping regions;
  • the determining module 1020 is configured to: divide the overlapping area into at least two candidate fusion areas, each of the at least two candidate fusion areas containing a line that divides each of the two images into two disconnected parts; and determine, based on the optical flow vector between the two images, an optical flow vector mapping error corresponding to each candidate fusion region, where the optical flow vector mapping error corresponding to each candidate fusion region is used to indicate the error between the respective sub-images of the two images in that candidate fusion region and the images at the same viewpoint to which they are mapped based on the optical flow vector;
  • the selecting module 1030 is configured to select, according to the optical flow vector mapping error corresponding to each of the at least two candidate fusion regions, the target fusion region of the two images from the at least two candidate fusion regions;
  • the fusion module 1040 is configured to merge the two images in the target fusion region selected by the selection module to obtain a merged image of the two images.
  • The overlapping area of the images to be merged is divided into multiple candidate fusion regions, and by determining the optical flow vector mapping error corresponding to each of the candidate fusion regions, the optical flow vector mapping (or optical flow field mapping) error of the sub-images in the overlapping region can be estimated. The target fusion region can then be selected from the candidate fusion regions according to the error estimate, and the two images are fused in the target fusion region to obtain a merged image of the two images. This not only reduces blur or ghosting of the merged image due to parallax, but also saves computation in image merging.
  • Optionally, the merging module 1040 is configured to: obtain, based on the optical flow vector, mapping images of the two overlapping sub-images at the intermediate viewpoint of the two viewpoints, where the coordinate value of the intermediate viewpoint is the average of the coordinate values of the two viewpoints and the two overlapping sub-images are the overlapping sub-images of the two images in the overlapping region; fuse the two mapping images in the target fusion region to obtain a fused sub-image; and splice the sub-images of the two images other than the two overlapping sub-images together with the fused sub-image to obtain the merged image.
  • Optionally, the two images include a first image and a second image, where the first image corresponds to a first viewpoint and the second image corresponds to a second viewpoint, and the determining module is configured to: acquire, based on the optical flow vector, an image of the overlapping sub-image of the first image in the overlapping region at the second viewpoint; determine a first occlusion image between the image corresponding to the second viewpoint and the overlapping sub-image of the second image in the overlapping region; acquire, based on the optical flow vector, a second occlusion image of the first occlusion image at the intermediate viewpoint of the two viewpoints; and determine the optical flow vector mapping error corresponding to each candidate fusion region according to the information used to indicate the occlusion region within that candidate fusion region in the second occlusion image.
  • the area for indicating occlusion includes a pixel point for indicating occlusion
  • the determining module 1020 is specifically configured to: determine the sum of the pixel values of the occlusion-indicating pixel points in the region of the second occlusion image corresponding to each candidate fusion region, and use this sum of pixel values as the optical flow vector mapping error corresponding to that candidate fusion region.
  • the selecting module 1030 is specifically configured to: determine a candidate fusion region that minimizes a corresponding optical flow vector mapping error in the at least two candidate fusion regions as a target fusion region of the two images.
  • the selecting module 1030 is further configured to determine, as the target fusion of the two images, the candidate fusion regions in which the corresponding optical flow vector mapping errors in the at least two candidate fusion regions are less than a preset threshold. region.
  • a sum of areas of the at least two candidate fusion regions is smaller than an area of the overlapping region.
  • The apparatus 1100 includes a processor 1110, a memory 1120, and a bus system 1130, where the processor 1110 and the memory 1120 are connected through the bus system 1130; the memory 1120 is used to store instructions, and the processor 1110 is configured to execute the instructions stored in the memory 1120.
  • The processor 1110 is configured to: acquire two images to be merged, the two images being images respectively collected from two viewpoints and having an overlapping region; divide the overlapping region into at least two candidate fusion regions, each of the at least two candidate fusion regions containing a line that divides each of the two images into two disconnected parts; determine, based on the optical flow vector between the two images, an optical flow vector mapping error corresponding to each candidate fusion region, where the optical flow vector mapping error corresponding to each candidate fusion region is used to indicate the error between the respective sub-images of the two images in that candidate fusion region and the images at the same viewpoint to which they are mapped based on the optical flow vector; select, according to the optical flow vector mapping errors corresponding to the at least two candidate fusion regions, a target fusion region of the two images from the at least two candidate fusion regions; and fuse the two images in the target fusion region to obtain a merged image of the two images.
  • The overlapping area of the images to be merged is divided into multiple candidate fusion regions, and by determining the optical flow vector mapping error corresponding to each of the candidate fusion regions, the optical flow vector mapping (or optical flow field mapping) error of the sub-images in the overlapping region can be estimated. The target fusion region can then be selected from the candidate fusion regions according to the error estimate, and the two images are fused in the target fusion region to obtain a merged image of the two images. This not only reduces blur or ghosting of the merged image due to parallax, but also saves computation in image merging.
  • The processor 1110 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • The general-purpose processor may be a microprocessor, or any conventional processor, or the like.
  • the memory 1120 can include read only memory and random access memory and provides instructions and data to the processor 1110. A portion of the memory 1120 can also include a non-volatile random access memory. For example, the memory 1120 can also store information of the device type.
  • the bus system 1130 may include a power bus, a control bus, a status signal bus, and the like in addition to the data bus.
  • the bus system 1130 can also include an internal bus, a system bus, and an external bus. However, for clarity of description, various buses are labeled as bus system 1130 in the figure.
  • each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 1110 or an instruction in a form of software.
  • the steps of the method disclosed in the embodiments of the present invention may be directly implemented as a hardware processor, or may be performed by a combination of hardware and software modules in the processor.
  • The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, a register, or the like.
  • the storage medium corresponds to the memory 1120, and the processor 1110 reads the information in the memory 1120 and completes the steps of the above method in combination with its hardware. To avoid repetition, it will not be described in detail here.
  • Optionally, the processor 1110 is configured to: obtain, based on the optical flow vector, mapping images of the two overlapping sub-images at the intermediate viewpoint of the two viewpoints, where the coordinate value of the intermediate viewpoint is the average of the coordinate values of the two viewpoints and the two overlapping sub-images are the overlapping sub-images of the two images in the overlapping region; fuse the two mapping images in the target fusion region to obtain a fused sub-image; and splice the sub-images of the two images other than the two overlapping sub-images together with the fused sub-image to obtain the merged image.
  • the two images include a first image and a second image, where the first image corresponds to a first view, and the second image corresponds to a second view
  • the processor 1110 is specifically configured to: acquire, based on the optical flow vector, an image of the overlapping sub-image of the first image in the overlapping region at the second viewpoint; determine the first occlusion image between the image corresponding to the second viewpoint and the overlapping sub-image of the second image in the overlapping region; acquire, based on the optical flow vector, the second occlusion image of the first occlusion image at the intermediate viewpoint of the two viewpoints; and determine, according to the information indicating the occlusion area in each candidate fusion region in the second occlusion image, the optical flow vector mapping error corresponding to that candidate fusion region.
  • the area for indicating occlusion includes a pixel point for indicating occlusion
  • the processor 1110 is specifically configured to: determine the sum of the pixel values of the occlusion-indicating pixel points in the region of the second occlusion image corresponding to each candidate fusion region, and use this sum of pixel values as the optical flow vector mapping error corresponding to that candidate fusion region.
  • the processor 1110 is specifically configured to: determine a candidate fusion region that minimizes a corresponding optical flow vector mapping error in the at least two candidate fusion regions as a target fusion region of the two images.
  • the processor 1110 is specifically configured to: determine, as the target fusion of the two images, the candidate fusion region in which the corresponding optical flow vector mapping error of the at least two candidate fusion regions is less than a preset threshold. region.
  • a sum of areas of the at least two candidate fusion regions is smaller than an area of the overlapping region.
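One way to satisfy this constraint, continuing the numpy sketch, is to lay the candidate fusion regions out as narrow strips separated by gaps so that their combined area stays below the overlap area; the strip count and width below are illustrative values, not taken from the text.

```python
import numpy as np

def candidate_strips(overlap_h, overlap_w, n_regions=3, strip_w=32):
    # Vertical candidate fusion strips inside the overlapping region, spaced
    # apart so that the sum of their areas is smaller than the overlap area.
    centers = np.linspace(strip_w, overlap_w - strip_w, n_regions)
    masks = []
    for c in centers:
        lo = max(0, int(c) - strip_w // 2)
        hi = min(overlap_w, int(c) + strip_w // 2)
        m = np.zeros((overlap_h, overlap_w), dtype=bool)
        m[:, lo:hi] = True
        masks.append(m)
    return masks
```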
  • the disclosed systems, devices, and methods may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the unit is only a logical function division.
  • in actual implementation, there may be another division manner; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, or an electrical, mechanical or other form of connection.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present invention.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, can be stored in a computer readable storage medium.
  • based on such an understanding, the part of the technical solutions of the present invention that essentially contributes to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium.
  • a number of instructions are included to cause a computer device (which may be a personal computer, server, or network device, etc.) to perform all or part of the steps of the method of various embodiments of the present invention.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

本发明实施例公开了一种合并图像的方法和装置,该方法包括:获取待合并的两张图像,该两张图像具有重叠区域,且该重叠区域被划分出至少两个候选融合区域;基于该两张图像之间的光流矢量,确定该至少两个候选融合区域中的每个候选融合区域对应的光流矢量映射误差;根据该至少两个候选融合区域各自对应的光流矢量映射误差,从该至少两个候选融合区域中选取该两个图像的目标融合区域;将该两张图像在该目标融合区域进行融合,从而得到该两张图像的合并图像。本发明实施例的合并图像的方法和装置能够减少由视差产生的模糊或重影。

Description

合并图像的方法和装置
本申请要求于2015年11月26日提交中国专利局、申请号为201510845405.X、发明名称为“合并图像的方法和装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明涉及图像处理领域,尤其涉及一种合并图像的方法和装置。
背景技术
随着电子信息产业与社会需求的发展,各种图像采集设备和大尺寸显示器迅速进入到人们的日常生活。随之而来的问题是如何获取宽视角、高分辨率的图像或视频数据。例如,日常生活中的生活图像的全景拍摄、卫星图像及航拍图像的合并等,都需要把从不同视点拍摄的图像无缝拼接起来,合并成一幅宽视野、高分辨率的全景拼接图像。现有技术中一般使用普通相机阵列采集一组具有重叠区域的低分辨率或小视角图像,然后将来自多个视点的图像进行合并,经过拼接和融合组合成一幅高分辨率、宽视角的新图像,经过合并的图像包括拼接前图像的全部信息,并且看起来像在一个视点拍摄而成。
现有技术中经常采用加权平均值法对图像进行合并。这种方式虽然简单,却没有考虑不同视点采集到的图像的视差问题,其直接结果就是在合并前两张图像重叠区域内的物体会在合并图像中出现模糊或重影。
发明内容
本发明实施例提供了一种合并图像的方法和装置,以减少合并图像由于视差产生的模糊或重影。
第一方面,本发明实施例提供了一种合并图像的方法,该方法包括:获取待合并的两张图像,该两张图像是分别从两个视点采集到的图像,该两张图像具有重叠区域;将该重叠区域划分出至少两个候选融合区域,该至少两个候选融合区域中的每个候选融合区域中存在将该两张图像的每张图像划分为不连通的两部分的线;基于该两张图像之间的光流矢量,确定该每个候 选融合区域对应的光流矢量映射误差,其中,该每个候选融合区域对应的光流矢量映射误差用于指示在该每个候选融合区域中的,该两张图像各自的子图像,分别基于该光流矢量,对应在同一视点的图像之间的误差;根据该至少两个候选融合区域各自对应的光流矢量映射误差,从该至少两个候选融合区域中选取出该两个图像的目标融合区域;将该两张图像在该目标融合区域进行融合,从而得到该两张图像的合并图像。
待合并的图像的重叠区域被划分成多个融合区域,通过确定该多个候选融合区域各自对应的光流矢量映射误差,能够对待合并图像在重叠区域内的子图像的光流矢量映射(或称光流场映射)误差进行估计,从而能够根据误差估计结果从候选融合区域中选择目标融合区域,将两张图像在目标融合区域进行融合,从而得到两张图像的合并图像,这样不但能够减少合并图像由于视差引起的模糊或重影,且能够节省图像合并的计算量。
结合第一方面,在第一方面的第一种可能的实现方式中,该方法还包括:该将该两张图像在该目标融合区域进行融合,从而得到该两张图像的合并图像,包括:基于该光流矢量,获取两张重叠子图像分别对应在该两个视点的中间视点的映射图像,该中间视点的坐标值为该两个视点的坐标值的均值,该两张重叠子图像为在该重叠区域中的该两张图像各自的重叠子图像;将两张映射图像在该目标融合区域进行融合,从而得到融合子图像;将该两张图像中的除该两张重叠子图像之外的子图像和该融合子图像进行拼接,从而得到该合并图像。
通过获取两张重叠子图像基于光流矢量对应在中间视点的映射图像,并将得到的映射图像在目标融合区域进行融合,从而得到基于光流矢量变换融合的融合子图像,最终获得合并图像,可以有效的减少合并图像由于视差引起的模糊或重影。
结合第一方面或第一方面的第一种可能的实现方式,在第一方面的第二种可能的实现方式中,该方法还包括:该两张图像包括第一图像和第二图像,该第一图像对应第一视点,该第二图像对应第二视点,该基于该两张图像之间的光流矢量,确定该至少两个候选融合区域中的每个候选融合区域对应的光流矢量映射误差,包括:基于该光流矢量,获取该第一图像中的在该重叠区域的重叠子图像对应在该第二视点的图像;确定该对应在该第二视点的图像与该第二图像中的在该重叠区域的重叠子图像之间的第一遮挡图像;基于 该光流矢量,获取该第一遮挡图像对应在该两个视点的中间视点的第二遮挡图像;根据在该第二遮挡图像中的该每个候选融合区域中的用于指示遮挡的区域的信息,确定该每个候选融合区域对应的光流矢量映射误差。
首先获取第一图像在重叠区域的重叠子图像基于光流矢量变换对应在第二视点的图像,并获取该对应在第二视点的图像与第二图像在重叠区域的重叠子图像之间的第一遮挡图像,然后根据该第一遮挡图像获取对应在中间视点的第二遮挡图像,根据第二遮挡图像的用于指示遮挡区域的信息,确定每个候选融合区域对应的光流矢量映射误差,从而可以根据光流矢量映射误差选取目标融合区域。
结合第一方面的第二种可能的实现方式,在第一方面的第三种可能的实现方式中,该方法还包括:该用于指示遮挡的区域包括用于指示遮挡的像素点,该根据该每个候选融合区域对应在该第二遮挡图像中的区域中的用于指示遮挡的区域的分布,确定该每个候选融合区域对应的光流矢量映射误差,包括:确定该每个候选融合区域对应在该第二遮挡图像中的区域中的用于指示遮挡的像素点的像素值之和;将该像素值之和作为该每个候选融合区域对应的光流矢量映射误差。
结合第一方面、第一方面的第一种至第三种可能的实现方式中的任一种可能的实现方式,在第一方面的第四种可能的实现方式中,该方法还包括:该根据该至少两个候选融合区域对应的光流矢量映射误差,从该至少两个候选融合区域中选取该两个图像的融合区域,包括:将该至少两个候选融合区域中对应的光流矢量映射误差最小的候选融合区域确定为该两个图像的目标融合区域。
通过选取光流矢量映射误差最小的候选融合区域为该两个图像的目标融合区域,并最终在目标融合区域对两张待合并图像进行融合,提高了合并图像的融合质量,减少了由于视差引起的模糊和重影。
结合第一方面、第一方面的第一种至第三种可能的实现方式中的任一种可能的实现方式,在第一方面的第五种可能的实现方式中,该方法还包括:该根据该至少两个候选融合区域对应的该光流矢量映射误差,从该至少两个候选融合区域中选取该两个图像的融合区域,包括:将该至少两个候选融合区域中对应的光流矢量映射误差小于预设阈值的候选融合区域确定为该两个图像的目标融合区域。
通过选取光流矢量映射误差小于预设阈值的候选融合区域为该两个图像的目标融合区域,并最终在目标融合区域对两张待合并图像进行融合,提高了合并图像的融合质量,减少了由于视差引起的模糊和重影。
结合第一方面、第一方面的第一种至第五种可能的实现方式中的任一种可能的实现方式,在第一方面的第六种可能的实现方式中,该方法还包括:该至少两个候选融合区域的面积之和小于该重叠区域的面积。
由于该至少两个候选融合区域的面积之和小于重叠区域的面积,所以可以减少计算光流矢量映射误差和融合图像时的计算量,提高合成图像的效率和处理速度。
在第二方面,本发明实施例提供了一种合并图像的装置,该装置包括:获取模块,用于获取待合并的两张图像,该两张图像是分别从两个视点采集到的图像,该两张图像具有重叠区域;确定模块,用于将该重叠区域划分出至少两个候选融合区域,该至少两个候选融合区域中的每个候选融合区域中存在将该两张图像的每张图像划分为不连通的两部分的线;基于该两张图像之间的光流矢量,确定该每个候选融合区域对应的光流矢量映射误差,其中,该每个候选融合区域对应的光流矢量映射误差用于指示在该每个候选融合区域中的,该两张图像各自的子图像,分别基于该光流矢量,对应在同一视点的图像之间的误差;选择模块,用于根据该至少两个候选融合区域各自对应的光流矢量映射误差,从该至少两个候选融合区域中选取出该两个图像的目标融合区域;融合模块,用于将该两张图像在该选择模块选取的该目标融合区域进行融合,从而得到该两张图像的合并图像。
待合并的图像的重叠区域被划分成多个融合区域,通过确定该多个候选融合区域各自对应的光流矢量映射误差,能够对待合并图像在重叠区域内的子图像的光流矢量映射(或称光流场映射)误差进行估计,从而能够根据误差估计结果从候选融合区域中选择目标融合区域,将两张图像在目标融合区域进行融合,从而得到两张图像的合并图像,这样不但能够减少合并图像由于视差引起的模糊或重影,且能够节省图像合并的计算量。
结合第二方面,在第二方面的第一种可能的实现方式中,该融合模块具体用于:基于该光流矢量,获取两张重叠子图像分别对应在该两个视点的中间视点的映射图像,该中间视点的坐标值为该两个视点的坐标值的均值,该两张重叠子图像为在该重叠区域中的该两张图像各自的重叠子图像;将两张 映射图像在该目标融合区域进行融合,从而得到融合子图像;将该两张图像中的除该两张重叠子图像之外的子图像和该融合子图像进行拼接,从而得到该合并图像。
结合第二方面或第二方面的第一种可能的实现方式,在第二方面的第二种可能的实现方式中,该两张图像包括第一图像和第二图像,该第一图像对应第一视点,该第二图像对应第二视点,该确定模块具体用于:基于该光流矢量,获取该第一图像中的在该重叠区域的重叠子图像对应在该第二视点的图像;确定该对应在该第二视点的图像与该第二图像中的在该重叠区域的重叠子图像之间的第一遮挡图像;基于该光流矢量,获取该第一遮挡图像对应在该两个视点的中间视点的第二遮挡图像;根据在该第二遮挡图像中的该每个候选融合区域中的用于指示遮挡的区域的信息,确定该每个候选融合区域对应的光流矢量映射误差。
结合第二方面的第二种可能的实现方式,在第二方面的第三种可能的实现方式中,该用于指示遮挡的区域包括用于指示遮挡的像素点,该确定模块具体用于:确定该每个候选融合区域对应在该第二遮挡图像中的区域中的用于指示遮挡的像素点的像素值之和;将该像素值之和作为该每个候选融合区域对应的光流矢量映射误差。
结合第二方面、第二方面的第一种至第三种可能的实现方式中的任一种可能的实现方式,在第二方面的第四种可能的实现方式中,该选择模块具体用于:将该至少两个候选融合区域中对应的光流矢量映射误差最小的候选融合区域确定为该两个图像的目标融合区域。
结合第二方面、第二方面的第一种至第三种可能的实现方式中的任一种可能的实现方式,在第二方面的第五种可能的实现方式中,该选择模块具体用于:将该至少两个候选融合区域中对应的光流矢量映射误差小于预设阈值的候选融合区域确定为该两个图像的目标融合区域。
结合第二方面、第二方面的第一种至第五种可能的实现方式中的任一种可能的实现方式,在第二方面的第六种可能的实现方式中,该至少两个候选融合区域的面积之和小于该重叠区域的面积。
第三方面,提供一种合并图像的装置,该装置包括处理器和存储器;该存储器用于存储代码;该处理器通过读取该存储器中存储的该代码,以用于执行第一方面提供的方法。
附图说明
为了更清楚地说明本发明实施例的技术方案,下面将对本发明实施例中所需要使用的附图作简单地介绍,显而易见地,下面所描述的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是根据本发明实施例的一种合并图像的方法的示意性流程图。
图2是根据本发明又一实施例的与现有技术之间的融合效果对比图。
图3是根据本发明另一实施例的待合并的两张图像的展示图。
图4是根据本发明另一实施例的求第一遮挡图像的过程示意图。
图5是根据本发明另一实施例的合并图像的方法的最终合并图像的效果图。
图6是根据本发明又一实施例的由前景物体产生的遮挡图像的示意图。
图7是根据本发明又一实施例的重叠区域的候选融合区域的划分方法示意图。
图8是根据本发明再一实施例的重叠区域的候选融合区域的划分方法示意图。
图9是根据本发明再一实施例的与现有技术之间的融合效果对比图。
图10是根据本发明实施例的合并图像的装置的示意性框图。
图11是根据本发明实施例的合并图形的装置的示意图。
具体实施方式
为了描述的方便和简洁,本发明实施例中的方法以两张图像的合并过程为例,显而易见,本发明实施例中的方法还可以应用于多张图像合并的图像融合系统,也可以应用于视频领域的图像融合系统。
应理解,一般情况下,待合并的图像可以由相机阵列采集,也可以由其他图像采集设备获取,例如,本发明实施例中的待合并的两张图像可以是相机阵列中的相邻两个相机在同一场景中采集到的两张图像。待合并的图像可以从不同的视点,即不同的角度采集而得。由于这些图像从不同的视点采集而成,所以合并图像时,在图像的重叠区域会由于视差产生模糊或重影。视差是指从两个视点上观察同一个目标所产生的方向差 异。合并图像的过程中应该尽量减少视差。另外,在合并图像的过程中,拼缝附近的物体可能出现重复或缺失,这种现象又称为鬼影。尤其当重叠区域对应的子图像的场景中存在运动物体或前景物体时,在图像融合过程中可能会产生模糊或重影。在合并图像的过程中应该减少或避免由视差产生的模糊或重影。
图1示出了本发明实施例的合并图像的方法100的示意性流程图。如图1所示,该方法100包括:
S110,获取待合并的两张图像,该两张图像是分别从两个视点采集到的图像,该两张图像具有重叠区域;
S120,将该重叠区域划分出至少两个候选融合区域,该至少两个候选融合区域中的每个候选融合区域中存在将该两张图像的每张图像划分为不连通的两部分的线;基于该两张图像之间的光流矢量,确定该每个候选融合区域对应的光流矢量映射误差,其中,该每个候选融合区域对应的光流矢量映射误差用于指示在该每个候选融合区域中的,该两张图像各自的子图像,分别基于该光流矢量,对应在同一视点的图像之间的误差;
S130,根据该至少两个候选融合区域各自对应的光流矢量映射误差,从该至少两个候选融合区域中选取出该两个图像的目标融合区域;
S140,将该两张图像在该目标融合区域进行融合,从而得到该两张图像的合并图像。
本发明实施例中,待合并的图像的重叠区域被划分成多个融合区域,通过确定该多个候选融合区域各自对应的光流矢量映射误差,能够对待合并图像在重叠区域内的子图像的光流矢量映射(或称光流场映射)的误差进行估计,从而能够根据误差估计结果从候选融合区域中选择目标融合区域,将两张图像在目标融合区域进行融合,从而得到两张图像的合并图像,这样不但能够减少合并图像由于视差引起的模糊或重影,且能够节省图像合并的计算量。
应理解,上述每个候选融合区域对应的光流矢量映射误差可以包括两张图像中的其中一张图像对应在该每个候选融合区域的子图像基于光流矢量,映射至两张图像中的另一张图像对应的视点的光流矢量映射误差。例如,两张图像可以包括第一图像和第二图像,可以基于光流矢量将第一图像对应在每个候选融合区域的子图像映射至第二图像对应的视点,并将第一图像映射后的子图像与第二图像的对应的子图像之间的误差作为每个候选融合区域的光流矢量映射误差。相似的,也可以基于光流矢量,将第二图像对应在每个候选融合区域的子图像映射至第一图像对应的视点,并将第二图像映射后的子图像与第一图像的对应的子图像之间的误差作为每个候选融合区域的光流矢量映射误差。每个候选融合区域可以考虑两个光流矢量映射误差中的其中一个光流矢量映射误差,从该至少两个候选融合区域中选取目标融合区域,也可以综合考虑两个光流矢量映射误差选取目标融合区域,本发明实施例对此不作具体限定。
应理解,该重叠区域被划分出至少两个候选融合区域,候选融合区域的数量取值可以为2-7个。上述两个图像可以各自对应一个视点,上述同一视点可以指待合并的两张图像各自的视点,也可以指待合并的两张图像各自对应的视点的中间的视点(或者说,中间视点)。应理解,上述两张图像之间的光流矢量可以包括两张图像中的每一张图像变换到另一张图像对应的视点的光流矢量。例如,待合并的两张图像可以包括第一图像和第二图像,第一图像和第二图像之间的光流矢量可以包括第一图像变换到第二图像对应的视点所采用的光流矢量,还可以包括第二图像变换到第一图像对应的视点所采用的光流矢量。
应理解,候选融合区域的光流矢量映射误差可以指示待合并图像的对应在候选融合区域的子图像经过光流矢量变换得到的变换结果的准确程度,变换结果的准确度越高,候选融合区域对应的光流矢量映射误差就越小。候选融合区域对应的光流矢量映射误差越小,则可以说明在该候选融合区域将两个图像进行融合产生的模糊或重影就越少,最终合并的图像的质量就越高。
可选地,当确定多个候选融合区域对应的光流矢量映射误差之后,可以根据该至少两个候选融合区域对应的光流矢量映射误差的大小,从至少两个候选融合区域中选取两个图像的目标融合区域,即选取两个待合并的图像进行融合的区域。通过对比不同候选融合区域对应的光流矢量映射误差的大小,可选取目标融合区域,以在保证融合质量的同时减小融合区域的大小,从而减少融合图像时的计算量,提高融合图像的速度。可选地,在选择目标融合区域时,可以选择至少两个候选融合区域中对应的光流矢量映射误差最小的候选融合区域为两个待合并图像的目标融合区域。例如,当两张图像包括第 一图像和第二图像时,可以选择至少两个候选融合区域中的与第一图像的子图像进行光流矢量变换后对应的光流矢量映射误差最小的候选融合区域为融合区域,也可以选择至少两个候选融合区域中的与第二图像的子图像进行光流矢量变换后对应的光流矢量映射误差最小的候选融合区域为融合区域。应理解,一般情况下,同一个候选融合区域对应的两个光流矢量映射误差的量级是相近的,而不同的候选融合区域对应的两个光流矢量映射误差的量级可以是差别较大的。所以,在选取光流矢量映射误差最小的候选融合区域时,可以只考虑候选融合区域对应的其中一个光流矢量映射误差的大小,也可以同时考虑候选融合区域对应的两个光流矢量映射误差的大小。
在本发明实施例中,通过选取光流矢量映射误差最小的候选融合区域为该两个图像的目标融合区域,并最终在目标融合区域对两张待合并图像进行融合,提高了合并图像的融合质量,减少了由于视差引起的模糊和重影。
可选地,在选择融合区域时,可以从该至少两个候选融合区域中选取光流矢量映射误差小于预设阈值的候选融合区域为目标融合区域,然后从目标候选融合区域中确定待合并的两个图像的融合区域。例如,当待合并的两张图像包括第一图像和第二图像时,选择目标融合区域时,可以选取至少两个候选融合区域中的与第一图像的子图像进行光流矢量变换后对应的光流矢量映射误差小于预设阈值的候选融合区域为目标候选融合区域,也可以选取至少两个候选融合区域中的与第一图像的子图像进行光流矢量变换后对应的光流矢量映射误差小于预设阈值的候选融合区域为目标融合区域,还可以选取至少两个候选融合区域中的与两张图像的子图像进行光流矢量变换后对应的两个光流矢量映射误差都小于预设阈值的候选融合区域为目标候选融合区域,即选取的目标候选融合区域对应的两个光流矢量映射误差都小于预设阈值。当小于预设阈值的候选融合区域不止一个时,可以从符合条件的候选融合区域中确定目标融合区域。当符合条件的候选融合区域只有一个时,将符合条件的候选融合区域为融合区域。可选地,预设阈值可以根据最终合并的图像的质量要求而设定,也可以根据经验来设定,本发明实施例并不限定于此。
在本发明实施例中,通过选取光流矢量映射误差小于预设阈值的候选融合区域为该两个图像的目标融合区域,并最终在目标融合区域对两张待合并图像进行融合,提高了合并图像的融合质量,减少了由于视差引起的模糊和 重影。
例如,作为一个具体实施例,当至少两个候选融合区域为三个候选融合区域时,可以首先计算中间的候选融合区域对应的光流矢量映射误差,由于一般情况下,根据经验,可认为两侧的候选融合区域的光流矢量映射误差小于中间的候选融合区域误差。当确定中间的候选融合区域对应的光流矢量映射误差后,可认为两侧的候选融合区域的光流矢量映射误差小于中间的候选融合区域的光流矢量映射误差。例如,当待合并的图像包括第一图像和第二图像时,可以设定M为第一图像在中间候选融合区域内的遮挡像素点的像素灰度值之和,可以设定N为第二图像在中间候选融合区域内的遮挡像素点的像素灰度值之和。可以将M和N确认为中间候选融合区域的光流矢量映射误差,并将M和N与预设阈值进行比较。当M和N都小于预设阈值时,可以确认中间候选融合区域为目标融合区域。因为在对图像进行融合时,最关心的是中间融合区域的融合效果,所以如果中间候选融合区域的光流矢量映射误差满足对合并图像质量的要求时,无需计算其他候选融合区域的光流矢量映射误差,从而减少计算光流矢量映射误差的工作量。若M与N中有任一项大于预设阈值时,不妨假设N>M>预设阈值,则需要选取两侧的候选融合区域中的其中一个候选融合区域作为目标融合区域。具体地,可以认为第二图像内的中间候选融合区域对应的子图像的光流矢量估计不准确,导致第二图像内的与中间候选融合区域对应的子图像通过光流矢量映射到第一图像视点的误差较大,所以需要尽量减少第二图像在融合结果中所占的比例。因此,可以选择靠近第二图像的候选融合区域为目标融合区域,最终的合并图像中,在目标融合区域中第一图像和第二图像所占的比重各为50%。而对于靠近第一图像的候选融合区域以及中间候选融合区域,采用第一图像的对应子图像经过光流矢量变换到中间视点后的第二映射子图像,第一图像所占的比重为100%。从而减少最终合并图像由于视差产生的模糊或重影,提高合并图像的处理速度。
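A compact sketch of the three-candidate-region rule described in the paragraph above: compute the occlusion-pixel sums M and N of the two images inside the middle region first; if both are below the preset threshold, the middle region is the target, otherwise move the fusion region toward the image whose error is larger so that image contributes less to the blend. Function and mask names are assumptions, and the behaviour when M equals N is not specified in the text.

```python
def choose_from_three_regions(occ_first_mid, occ_second_mid,
                              middle_mask, near_first_mask, near_second_mask,
                              threshold):
    # occ_*_mid: occlusion images of the first/second image, already mapped to
    # the intermediate viewpoint; the three masks select the candidate strips.
    M = int(occ_first_mid[middle_mask].sum())   # error of the first image
    N = int(occ_second_mid[middle_mask].sum())  # error of the second image
    if M < threshold and N < threshold:
        return middle_mask   # middle region already meets the quality target
    # The image with the larger middle-region error was mapped less reliably,
    # so pick the strip closer to that image to shrink its share in the result.
    return near_second_mask if N > M else near_first_mask
```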
可选地,在确定目标融合区域后,将该两张图像在该目标融合区域进行融合,从而得到该两张图像的合并图像。
可选地,作为一个实施例,在本发明实施例的合并图像的方法100中,该将该两张图像在该目标融合区域进行融合,从而得到该两张图像的合并图像,包括:基于该光流矢量,获取两张重叠子图像分别对应在该两个视点的 中间视点的映射图像,该中间视点的坐标值为该两个视点的坐标值的均值,该两张重叠子图像为在该重叠区域中的该两张图像各自的重叠子图像;将两张映射图像在该目标融合区域进行融合,从而得到融合子图像;将该两张图像中的除该两张重叠子图像之外的子图像和该融合子图像进行拼接,从而得到该合并图像。
在本发明实施例中,通过获取两张重叠子图像基于光流矢量对应在中间视点的映射图像,并将得到的映射图像在目标融合区域进行融合,从而得到基于光流矢量变换融合的融合子图像,最终获得合并图像,可以有效的减少合并图像由于视差引起的模糊或重影。
例如,待合并的两张图像可以包括第一图像和第二图像,可以根据第一图像和第二图像之间的光流矢量,将第一图像中的对应在融合区域的子图像变换至中间视点,将第二图像中的对应在融合区域的子图像变换至中间视点,并将变换到中间视点的两个子图像进行融合处理。其中,对两个子图像进行融合处理可以使用加权平均值法,又称为羽化方法。还可以使用配置中值滤波器的加权平均值法,即配置中值滤波器的羽化方法。可选地,在获取融合子图像之后,可以根据待合并的两张图像以及融合子图像,对待合并的图像进行合并处理,得到最终的合并图像。例如,待合并的图像包括第一图像和第二图像,以融合区域为界,对于重叠区域内的除融合区域之外靠近第一图像的第一区域,确定第一图像的对应在该第一区域内的子图像基于光流矢量变换,映射到中间视点后得到的第一映射子图像,对于重叠区域内的除融合区域之外靠近第二图像的第二区域,确定第二图像的对应在该第二区域内的子图像经过光流矢量变换、映射到中间视点后得到的第二映射子图像。然后将第一图像内除重叠区域之外的区域对应的子图像、第二图像内除重叠区域之外的区域对应的子图像、融合子图像、第一映射子图像以及第二映射子图像拼接在一起,得到最终合并的合并图像。
又例如,待合并的图像可以包括第一图像和第二图像,可以设定第一图像的第一映射子图像的像素点的坐标为(i,j),该坐标距离第一图像的边缘为λ1像素,距离第二图像的边缘为λ2像素,第一图像内的对应在重叠区域的子图像映射到第一映射子图像对应的光流矢量为F(Fi,Fj),可以采用反向投影的方式,根据公式(1)确定将第一图像映射到中间视点的第三映射子图像的像素点的坐标。获取第二图像映射到中间视点得到的第四映射子图像的 像素点的方法与之相应,为了描述的简洁和方便,此处不再赘述。
公式(1)为:
（公式(1)在原文中为嵌入图像PCTCN2016095880-appb-000001,未以文本形式给出。）
其中(Bi,Bj)表示第三映射子图像的像素点位置的坐标,也是融合子图像的像素点对应的位置的坐标。可以设定第一图像内的与融合子图像匹配的像素值为c1,第二图像内的与融合子图像匹配的像素值c2,则可以根据公式(2),采用加权平均值法计算融合子图像的像素值。
公式(2)为:
（公式(2)在原文中为嵌入图像PCTCN2016095880-appb-000002,未以文本形式给出。）
图2示出了本发明实施例与现有技术之间的融合效果对比图。如图2所示,图2中的(a)出了采用加权平均值法得到的对应在重叠区域的子图像,图2中的(b)示出了采用本发明实施例的合并图像的方法得到的对应在重叠区域的子图像。从图2可以看出,与现有技术相比,本发明实施例的合并图像的方法可以有效减少由于视差产生的模糊或重影。
可选地,在本发明实施例中,将该两张映射图像在该目标融合区域进行融合,从而得到融合子图像的具体实现方法也可以采用配合中值滤波法的改进羽化方法来实现对第三映射子图像以及第四映射子图像之间的融合,中值滤波法主要利用中值滤波器处理重叠区域像素。将中值滤波器作用在边界附近的区域,当某个像素值与周围像素值的灰度值差别较大的时候,对这个像素点进行中值滤波,从而使它的值接近周围像素的值,从而能够消除光强的不连续问题。中值滤波法在场景存在运动目标的情景下,能够突出运动目标,保持原有背景。所以,本发明实施例中的配合中值滤波法的改进羽化方法能够消除模糊或重影,达到较好的图像合并效果。
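A rough companion sketch to the feathering and median-filter discussion above: the exact weighting of formula (2) is only available as an embedded image in the source, so the distance-based weights below are an assumed reconstruction of a standard feathering rule, and the median-filter touch-up simply pulls pixels that deviate sharply from their filtered neighbourhood toward the median.

```python
import numpy as np
import cv2

def feather_blend_with_median(mid_first, mid_second, lam1, lam2,
                              ksize=5, jump=30):
    # lam1 / lam2: per-pixel distances to the edges of the first and second
    # image (an assumed reading of λ1 and λ2 above); the weight of the first
    # image grows with its own edge distance, a common feathering choice.
    w = (lam1 / (lam1 + lam2 + 1e-6)).astype(np.float32)[..., None]
    fused = (w * mid_first.astype(np.float32)
             + (1.0 - w) * mid_second.astype(np.float32)).astype(np.uint8)
    # Median-filter touch-up near the seam: where a pixel differs strongly
    # from the median of its neighbourhood, replace it with that median to
    # suppress intensity discontinuities while keeping moving foreground.
    med = cv2.medianBlur(fused, ksize)
    gray = cv2.cvtColor(fused, cv2.COLOR_BGR2GRAY).astype(np.int16)
    gray_med = cv2.cvtColor(med, cv2.COLOR_BGR2GRAY).astype(np.int16)
    outliers = np.abs(gray - gray_med) > jump
    fused[outliers] = med[outliers]
    return fused
```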
作为一个实施例,本发明实施例的合并图像的方法100还包括:该两张图像包括第一图像和第二图像,该第一图像对应第一视点,该第二图像对应第二视点,该基于该两张图像之间的光流矢量,确定该至少两个候选融合区域中的每个候选融合区域对应的光流矢量映射误差,包括:基于该光流矢量,获取该第一图像中的在该重叠区域的重叠子图像对应在该第二视点的图像;确定该对应在该第二视点的图像与该第二图像中的在该重叠区域的重叠子图像之间的第一遮挡图像;基于该光流矢量,获取该第一遮挡图像对应在该 两个视点的中间视点的第二遮挡图像;根据在该第二遮挡图像中的该每个候选融合区域中的用于指示遮挡的区域的信息,确定该每个候选融合区域对应的光流矢量映射误差。
在本发明实施例中,首先获取第一图像在重叠区域的重叠子图像基于光流矢量变换对应在第二视点的图像,并获取该对应在第二视点的图像与第二图像在重叠区域的重叠子图像之间的第一遮挡图像,然后根据该第一遮挡图像获取对应在中间视点的第二遮挡图像,根据第二遮挡图像的用于指示遮挡区域的信息,确定每个候选融合区域对应的光流矢量映射误差,从而可以根据光流矢量映射误差选取目标融合区域。
应理解,上述第二遮挡图像在每个候选融合区域内的用于指示遮挡的区域的分布可以通过该第二遮挡图内的像素点的像素值表征,具体地,可以通过第二遮挡图像内的遮挡像素点与非遮挡像素点的分布来表征,例如,遮挡像素点越多,表示第二遮挡图像的遮挡区域越多,说明该每个候选融合区域对应的光流矢量映射误差越大。
还应理解,由于最后在合并图像过程中,需要把两张图像内的与融合区域对应的两张子图像通过光流矢量映射到中间视点,再进行融合。所以,与第一遮挡图像相比,根据第二遮挡图像确定的光流矢量映射误差可以更准确的指示光流矢量变换的准确程度,进而指示待合并的图像在候选融合区域内融合后得到的合并图像的质量好坏程度。
可选地,本发明实施例对于确定上述第一遮挡图像的方法不做限定,例如,可以采用最优化算法来确定上述第一遮挡图像,本发明实施例对采用的最优化算法不作具体限定。例如,可以采用根据图像分割(Graph-cut)法求能量函数最优结果的方法确定第一遮挡图像,也可以采用其他确定遮挡图的方法。
例如,可以采用根据图像分割(Graph-cut)法求能量函数最优结果的方法确定第一遮挡图像。图3示出了待合并的两张图像。如图3所示,待合并的图像可以为对应在左侧的第一图像和对应在右侧的第二图像。其中,可以定义IL(x,y)表示第一图像内的对应在重叠区域的子图像,定义IR(x,y)表示第二图像内的对应在重叠区域的子图像。其中,X=(x,y)可以表示IL(x,y)或IR(x,y)中的像素点。W=(u,v)用于表示从IL(x,y)映射到IR(x,y)对应视点采用的光流矢量,f(X)可以表示为X点的标签值。对IL(x,y)经过光流矢量变换映射 到IR(x,y)的视点得到的第一映射子图可以表示为ILf(x,y)。可以根据公式(3)、公式(4)和公式(5),采用Graph-cut法求解能量函数E[f(x)]的最优结果,以得到ILf(x,y)与IR(x,y)的第一遮挡图像,可以将该第一遮挡图像表示为△I ILf(x,y)。
其中,公式(3)、公式(4)和公式(5)分别为:
E[f(x)] = Edata[f(x)] + Esmooth[f(x)]    (3)
（公式(4)为数据项Edata[f(x)]的定义,公式(5)为平滑项Esmooth[f(x)]的定义,在原文中分别为嵌入图像PCTCN2016095880-appb-000003和PCTCN2016095880-appb-000004,未以文本形式给出。）
其中,Edata[f(x)]表示数据项,由ILf(x,y)和IR(x,y)的对应坐标的像素值之间建立关系得到。βocclusion表示遮挡的数据项惩罚值,γocclusiondistance表示遮挡的距离惩罚值。Esmooth[f(x)]表示平滑项,用于确保遮挡的准确性。标签值f(X)=1表示遮挡像素点,f(X)=0时表示非遮挡像素点。图4示出了求第一遮挡图像的过程示意图,如图4所示,图4中的(a)展示了第一图像内的对应在重叠区域的子图像IL(x,y),图4中的(b)展示了第一图像的第一映射子图像ILf(x,y),图4中的(c)展示了ILf(x,y)与IR(x,y)之间的第一遮挡图像△I ILf(x,y)。其中,图4中的(c)中的白色像素点代表遮挡像素点。在本发明实施例中,在确定第一遮挡图像后,可以根据光流矢量W(u,v),将第一图像的第一遮挡图像映射到中间视点,得到第二遮挡图,并根据第二遮挡图像中的遮挡区域分布,确定第一图像在各个候选融合区域对应的光流矢量映射误差,图5示出了本例中的合并图像的方法的最终合并图像的效果图,从图5中可以看出,本发明实施例的方法可以有效地减少由于视差产生的模糊或重影。
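For readers who want to experiment with the energy minimisation E[f(x)] = Edata[f(x)] + Esmooth[f(x)] sketched above, the snippet below shows the overall shape of a two-label (occluded / visible) cut using the PyMaxflow library. It is only a structural sketch: formulas (4) and (5) are embedded as images in the source, so the simple photometric data cost, the penalty values, and the label polarity are assumptions that would have to be checked against both the original formulas and the library's source/sink convention.

```python
import numpy as np
import maxflow  # PyMaxflow

def graphcut_occlusion(first_warped, second, occl_penalty=50.0, smooth=10.0):
    # Two-label energy of the form E = E_data + E_smooth minimised by one
    # min-cut over the pixel grid of the overlapping sub-images.
    diff = np.linalg.norm(first_warped.astype(np.float32)
                          - second.astype(np.float32), axis=-1)
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(diff.shape)
    g.add_grid_edges(nodes, smooth)        # pairwise (smoothness) term
    # Unary term: labelling a pixel "visible" costs its photometric
    # disagreement `diff`, labelling it "occluded" costs `occl_penalty`,
    # so pixels with large disagreement tend to be cut off as occluded.
    g.add_grid_tedges(nodes, diff, occl_penalty)
    g.maxflow()
    occluded = g.get_grid_segments(nodes)  # boolean grid; polarity may need flipping
    return occluded.astype(np.uint8)
```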
可选地,作为一个实施例,本发明实施例的合并图像的方法100中,该用于指示遮挡的区域包括用于指示遮挡的像素点,该根据该每个候选融合区域对应在该第二遮挡图像中的区域中的用于指示遮挡的区域的分布,确定该每个候选融合区域对应的光流矢量映射误差,包括:确定该每个候选融合区域对应在该第二遮挡图像中的区域中的用于指示遮挡的像素点的像素值之和;将该像素值之和作为该每个候选融合区域对应的光流矢量映射误差。
具体地,图6示出了由前景物体产生的遮挡图像的示意图,该前景物体 可以是运动物体,其中,白色部分表示图像的背景,点状分布部分表示前景物体,斜线部分表示图6中的(a)和图6中的(b)相比较产生的遮挡图像,如图6所示,遮挡图可以指示两张图像内由于存在运动物体而产生的差异的大小,也可以指示由于存在视差而导致的前景物体的差异的大小。遮挡图中的遮挡像素点越多,表示产生遮挡图的两张图像之内的前景物体或者流场数据的差异越大,则融合这两张图像时可能产生的模糊或重影现象就越严重。可选地,可以设定遮挡图的遮挡像素值为1,非遮挡像素值为0。则可以通过计算候选融合区域内的所有像素值之和来确定光流矢量映射误差。
可选地,作为一个实施例,本发明实施例的合并图像的方法100中,该至少两个候选融合区域的面积之和小于该重叠区域的面积。
可选地,图7和图8示出了重叠区域的两种候选融合区域的划分方法的示意图,其中斜线的部分表示候选融合区域。如图7所示,这些候选融合区域互相之间可以是无缝相接排布的,即这些候选融合区域占的区域面积之和等于重叠区域的面积。可选地,如图8所示,这些候选融合区域互相之间也可以是间隔排布的,即在这些候选融合区域之间还存在非候选融合区域区域,也可以理解为这些候选融合区域所占的区域面积之和小于重叠区域的面积。图9示出了本发明实施例与现有技术之间的融合效果对比图。如图9所示,图9中的(a)示出了现有技术中采用加权值法得到的对应在重叠区域的子图像,图9中的(b)示出了本发明实施例的候选融合区域间隔排布的合并图像的方法得到的对应在重叠区域的子图像。由图9可以看出,与现有技术相比,本发明实施例的候选融合区域间隔排布的合并图像的方法可以有效的减少由于视差产生的模糊或重影。而且由于候选融合区域采用了间隔排布的方法,所以可以减少计算光流矢量映射误差和融合图像时的计算量,提高合成图像的效率和处理速度。
上文结合图1至图9详细阐述了本发明实施例的合并图像的方法的具体实施例,下文将结合图10和图11,详细描述本发明实施例的合并图像的装置。
图10示出了根据本发明实施例的合并图像的装置1000的示意图,应理解,本发明实施例的装置1000中的各个模块的下述和其他操作和/或功能分别为了实现图1至图9中的各个方法的相应流程,为了简洁,在此不再赘述,如图10所示,该装置1000包括:
获取模块1010,用于获取待合并的两张图像,该两张图像是分别从两个视点采集到的图像,该两张图像具有重叠区域;
确定模块1020,用于将该重叠区域划分出至少两个候选融合区域,该至少两个候选融合区域中的每个候选融合区域中存在将该两张图像的每张图像划分为不连通的两部分的线;基于该两张图像之间的光流矢量,确定该每个候选融合区域对应的光流矢量映射误差,其中,该每个候选融合区域对应的光流矢量映射误差用于指示在该每个候选融合区域中的,该两张图像各自的子图像,分别基于该光流矢量,对应在同一视点的图像之间的误差;
选择模块1030,用于根据该至少两个候选融合区域各自对应的光流矢量映射误差,从该至少两个候选融合区域中选取出该两个图像的目标融合区域;
融合模块1040,用于将该两张图像在该选择模块选取的该目标融合区域进行融合,从而得到该两张图像的合并图像。
本发明实施例中,待合并的图像的重叠区域被划分成多个融合区域,通过确定该多个候选融合区域各自对应的光流矢量映射误差,能够对待合并图像在重叠区域内的子图像的光流矢量映射(或称光流场映射)的误差进行估计,从而能够根据误差估计结果从候选融合区域中选择目标融合区域,将两张图像在目标融合区域进行融合,从而得到两张图像的合并图像,这样不但能够减少合并图像由于视差引起的模糊或重影,且能够节省图像合并的计算量。
可选地,作为一个实施例,该融合模块1040具体用于:基于该光流矢量,获取两张重叠子图像分别对应在该两个视点的中间视点的映射图像,该中间视点的坐标值为该两个视点的坐标值的均值,该两张重叠子图像为在该重叠区域中的该两张图像各自的重叠子图像;将两张映射图像在该目标融合区域进行融合,从而得到融合子图像;将该两张图像中的除该两张重叠子图像之外的子图像和该融合子图像进行拼接,从而得到该合并图像。
可选地,作为一个实施例,该两张图像包括第一图像和第二图像,该两张图像包括第一图像和第二图像,该第一图像对应第一视点,该第二图像对应第二视点,该确定模块具体用于:基于该光流矢量,获取该第一图像中的在该重叠区域的重叠子图像对应在该第二视点的图像;确定该对应在该第二视点的图像与该第二图像中的在该重叠区域的重叠子图像之间的第一遮挡图像;基于该光流矢量,获取该第一遮挡图像对应在该两个视点的中间视点 的第二遮挡图像;根据在该第二遮挡图像中的该每个候选融合区域中的用于指示遮挡的区域的信息,确定该每个候选融合区域对应的光流矢量映射误差。
可选地,作为一个实施例,该用于指示遮挡的区域包括用于指示遮挡的像素点,该确定模块1020具体用于:确定该每个候选融合区域对应在该第二遮挡图像中的区域中的用于指示遮挡的像素点的像素值之和;将该像素值之和作为该每个候选融合区域对应的光流矢量映射误差。
可选地,作为一个实施例,选择模块1030具体用于:将该至少两个候选融合区域中对应的光流矢量映射误差最小的候选融合区域确定为该两个图像的目标融合区域。
可选地,作为一个实施例,选择模块1030还具体用于:将该至少两个候选融合区域中对应的光流矢量映射误差小于预设阈值的候选融合区域确定为该两个图像的目标融合区域。
可选地,作为一个实施例,该至少两个候选融合区域的面积之和小于该重叠区域的面积。
图11示出了根据本发明实施例的合并图像的装置1100的示意图,如图11所示,该装置1100包括:处理器1110,存储器1120,总线系统1130,其中该处理器1100和该存储器1120通过该总线系统1130相连,该存储器1120用于存储指令,该处理器1110用于执行该存储器1120存储的指令。
其中,该处理器1110用于:获取待合并的两张图像,该两张图像是分别从两个视点采集到的图像,该两张图像具有重叠区域;将该重叠区域划分出至少两个候选融合区域,该至少两个候选融合区域中的每个候选融合区域中存在将该两张图像的每张图像划分为不连通的两部分的线;基于该两张图像之间的光流矢量,确定该每个候选融合区域对应的光流矢量映射误差,其中,该每个候选融合区域对应的光流矢量映射误差用于指示在该每个候选融合区域中的,该两张图像各自的子图像,分别基于该光流矢量,对应在同一视点的图像之间的误差;根据该至少两个候选融合区域各自对应的光流矢量映射误差,从该至少两个候选融合区域中选取出该两个图像的目标融合区域;将该两张图像在该目标融合区域进行融合,从而得到该两张图像的合并图像。
本发明实施例中,待合并的图像的重叠区域被划分成多个融合区域,通过确定该多个候选融合区域各自对应的光流矢量映射误差,能够对待合并图像在重叠区域内的子图像的光流矢量映射(或称光流场映射)的误差进行估 计,从而能够根据误差估计结果从候选融合区域中选择目标融合区域,将两张图像在目标融合区域进行融合,从而得到两张图像的合并图像,这样不但能够减少合并图像由于视差引起的模糊或重影,且能够节省图像合并的计算量。
应理解,在本发明实施例中,该处理器1110可以是中央处理单元(Central Processing Unit,简称为“CPU”),该处理器1110还可以是其他通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现成可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
该存储器1120可以包括只读存储器和随机存取存储器,并向处理器1110提供指令和数据。存储器1120的一部分还可以包括非易失性随机存取存储器。例如,存储器1120还可以存储设备类型的信息。
该总线系统1130除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。该总线系统1130还可以包括内部总线、系统总线和外部总线。但是为了清楚说明起见,在图中将各种总线都标为总线系统1130。
在实现过程中,上述方法的各步骤可以通过处理器1110中的硬件的集成逻辑电路或者软件形式的指令完成。结合本发明实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以对应在随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质对应在存储器1120,处理器1110读取存储器1120中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。
可选地,作为一个实施例,该处理器1110具体用于:基于该光流矢量,获取两张重叠子图像分别对应在该两个视点的中间视点的映射图像,该中间视点的坐标值为该两个视点的坐标值的均值,该两张重叠子图像为在该重叠区域中的该两张图像各自的重叠子图像;将两张映射图像在该目标融合区域进行融合,从而得到融合子图像;将该两张图像中的除该两张重叠子图像之外的子图像和该融合子图像进行拼接,从而得到该合并图像。
可选地,作为一个实施例,该两张图像包括第一图像和第二图像,该第一图像对应第一视点,该第二图像对应第二视点,该处理器1110具体用于:基于该光流矢量,获取该第一图像中的在该重叠区域的重叠子图像对应在该 第二视点的图像;确定该对应在该第二视点的图像与该第二图像中的在该重叠区域的重叠子图像之间的第一遮挡图像;基于该光流矢量,获取该第一遮挡图像对应在该两个视点的中间视点的第二遮挡图像;根据在该第二遮挡图像中的该每个候选融合区域中的用于指示遮挡的区域的信息,确定该每个候选融合区域对应的光流矢量映射误差。
可选地,作为一个实施例,该用于指示遮挡的区域包括用于指示遮挡的像素点,该处理器1110具体用于:确定该每个候选融合区域对应在该第二遮挡图像中的区域中的用于指示遮挡的像素点的像素值之和;将该像素值之和作为该每个候选融合区域对应的光流矢量映射误差。
可选地,作为一个实施例,该处理器1110具体用于:将该至少两个候选融合区域中对应的光流矢量映射误差最小的候选融合区域确定为该两个图像的目标融合区域。
可选地,作为一个实施例,该处理器1110具体用于:将该至少两个候选融合区域中对应的光流矢量映射误差小于预设阈值的候选融合区域确定为该两个图像的目标融合区域。
可选地,作为一个实施例,该至少两个候选融合区域的面积之和小于该重叠区域的面积。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,该单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系 统,或一些特征可以忽略,或不执行。另外,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口、装置或单元的间接耦合或通信连接,也可以是电的,机械的或其它的形式连接。
该作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以对应在一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本发明实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以是两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
该集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分,或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例该方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上某一实施例中的技术特征和描述,为了使申请文件简洁清楚,可以理解适用于其他实施例,在其他实施例不再一一赘述。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到各种等效的修改或替换,这些修改或替换都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以权利要求的保护范围为准。

Claims (14)

  1. 一种合并图像的方法,其特征在于,包括:
    获取待合并的两张图像,所述两张图像是分别从两个视点采集到的图像,所述两张图像具有重叠区域;
    将所述重叠区域划分出至少两个候选融合区域,所述至少两个候选融合区域中的每个候选融合区域中存在将所述两张图像的每张图像划分为不连通的两部分的线;基于所述两张图像之间的光流矢量,确定所述每个候选融合区域对应的光流矢量映射误差,其中,所述每个候选融合区域对应的光流矢量映射误差用于指示在所述每个候选融合区域中的,所述两张图像各自的子图像,分别基于所述光流矢量,对应在同一视点的图像之间的误差;
    根据所述至少两个候选融合区域各自对应的光流矢量映射误差,从所述至少两个候选融合区域中选取出所述两个图像的目标融合区域;
    将所述两张图像在所述目标融合区域进行融合,从而得到所述两张图像的合并图像。
  2. 如权利要求1所述的方法,其特征在于,所述将所述两张图像在所述目标融合区域进行融合,从而得到所述两张图像的合并图像,包括:
    基于所述光流矢量,获取两张重叠子图像分别对应在所述两个视点的中间视点的映射图像,所述中间视点的坐标值为所述两个视点的坐标值的均值,所述两张重叠子图像为在所述重叠区域中的所述两张图像各自的重叠子图像;
    将两张映射图像在所述目标融合区域进行融合,从而得到融合子图像;
    将所述两张图像中的除所述两张重叠子图像之外的子图像和所述融合子图像进行拼接,从而得到所述合并图像。
  3. 如权利要求1或2所述的方法,其特征在于,所述两张图像包括第一图像和第二图像,所述第一图像对应第一视点,所述第二图像对应第二视点,
    所述基于所述两张图像之间的光流矢量,确定所述至少两个候选融合区域中的每个候选融合区域对应的光流矢量映射误差,包括:
    基于所述光流矢量,获取所述第一图像中的在所述重叠区域的重叠子图像对应在所述第二视点的图像;
    确定所述对应在所述第二视点的图像与所述第二图像中的在所述重叠 区域的重叠子图像之间的第一遮挡图像;
    基于所述光流矢量,获取所述第一遮挡图像对应在所述两个视点的中间视点的第二遮挡图像;
    根据在所述第二遮挡图像中的所述每个候选融合区域中的用于指示遮挡的区域的信息,确定所述每个候选融合区域对应的光流矢量映射误差。
  4. 如权利要求3所述的方法,其特征在于,所述用于指示遮挡的区域的信息包括用于指示遮挡的像素点的像素值,所述根据所述每个候选融合区域对应在所述第二遮挡图像中的区域中的用于指示遮挡的区域的分布,确定所述每个候选融合区域对应的光流矢量映射误差,包括:
    确定所述每个候选融合区域对应在所述第二遮挡图像中的区域中的用于指示遮挡的像素点的像素值之和;
    将所述像素值之和作为所述每个候选融合区域对应的光流矢量映射误差。
  5. 如权利要求1-4中任一项所述的方法,其特征在于,所述根据所述至少两个候选融合区域对应的光流矢量映射误差,从所述至少两个候选融合区域中选取所述两个图像的融合区域,包括:
    将所述至少两个候选融合区域中对应的光流矢量映射误差最小的候选融合区域确定为所述两个图像的目标融合区域。
  6. 如权利要求1-4中任一项所述的方法,其特征在于,所述根据所述至少两个候选融合区域对应的所述光流矢量映射误差,从所述至少两个候选融合区域中选取所述两个图像的融合区域,包括:
    将所述至少两个候选融合区域中对应的光流矢量映射误差小于预设阈值的候选融合区域确定为所述两个图像的目标融合区域。
  7. 如权利要求1-6中任一项所述的方法,其特征在于,所述至少两个候选融合区域的面积之和小于所述重叠区域的面积。
  8. 一种合并图像的装置,其特征在于,包括:
    获取模块,用于获取待合并的两张图像,所述两张图像是分别从两个视点采集到的图像,所述两张图像具有重叠区域;
    确定模块,用于将所述重叠区域划分出至少两个候选融合区域,所述至少两个候选融合区域中的每个候选融合区域中存在将所述两张图像的每张图像划分为不连通的两部分的线;基于所述两张图像之间的光流矢量,确定 所述每个候选融合区域对应的光流矢量映射误差,其中,所述每个候选融合区域对应的光流矢量映射误差用于指示在所述每个候选融合区域中的,所述两张图像各自的子图像,分别基于所述光流矢量,对应在同一视点的图像之间的误差;
    选择模块,用于根据所述至少两个候选融合区域各自对应的光流矢量映射误差,从所述至少两个候选融合区域中选取出所述两个图像的目标融合区域;
    融合模块,用于将所述两张图像在所述选择模块选取的所述目标融合区域进行融合,从而得到所述两张图像的合并图像。
  9. 如权利要求8所述的装置,其特征在于,所述融合模块具体用于:基于所述光流矢量,获取两张重叠子图像分别对应在所述两个视点的中间视点的映射图像,所述中间视点的坐标值为所述两个视点的坐标值的均值,所述两张重叠子图像为在所述重叠区域中的所述两张图像各自的重叠子图像;将两张映射图像在所述目标融合区域进行融合,从而得到融合子图像;将所述两张图像中的除所述两张重叠子图像之外的子图像和所述融合子图像进行拼接,从而得到所述合并图像。
  10. 如权利要求8或9所述的装置,其特征在于,所述两张图像包括第一图像和第二图像,所述第一图像对应第一视点,所述第二图像对应第二视点,所述确定模块具体用于:基于所述光流矢量,获取所述第一图像中的在所述重叠区域的重叠子图像对应在所述第二视点的图像;确定所述对应在所述第二视点的图像与所述第二图像中的在所述重叠区域的重叠子图像之间的第一遮挡图像;基于所述光流矢量,获取所述第一遮挡图像对应在所述两个视点的中间视点的第二遮挡图像;根据在所述第二遮挡图像中的所述每个候选融合区域中的用于指示遮挡的区域的信息,确定所述每个候选融合区域对应的光流矢量映射误差。
  11. 如权利要求10中所述的装置,其特征在于,所述用于指示遮挡的区域包括用于指示遮挡的像素点,所述确定模块具体用于:确定所述每个候选融合区域对应在所述第二遮挡图像中的区域中的用于指示遮挡的像素点的像素值之和;将所述像素值之和作为所述每个候选融合区域对应的光流矢量映射误差。
  12. 如权利要求8-11中任一项所述的装置,其特征在于,所述选择模块 具体用于:将所述至少两个候选融合区域中对应的光流矢量映射误差最小的候选融合区域确定为所述两个图像的目标融合区域。
  13. 如权利要求8-11中任一项所述的装置,其特征在于,所述选择模块具体用于:将所述至少两个候选融合区域中对应的光流矢量映射误差小于预设阈值的候选融合区域确定为所述两个图像的目标融合区域。
  14. 如权利要求8-13中任一项所述的装置,其特征在于,所述至少两个候选融合区域的面积之和小于所述重叠区域的面积。
PCT/CN2016/095880 2015-11-26 2016-08-18 合并图像的方法和装置 WO2017088533A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510845405.X 2015-11-26
CN201510845405.XA CN106803899B (zh) 2015-11-26 2015-11-26 合并图像的方法和装置

Publications (1)

Publication Number Publication Date
WO2017088533A1 true WO2017088533A1 (zh) 2017-06-01

Family

ID=58762957

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/095880 WO2017088533A1 (zh) 2015-11-26 2016-08-18 合并图像的方法和装置

Country Status (2)

Country Link
CN (1) CN106803899B (zh)
WO (1) WO2017088533A1 (zh)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274337B (zh) * 2017-06-20 2020-06-26 长沙全度影像科技有限公司 一种基于改进光流的图像拼接方法
CN107369129B (zh) * 2017-06-26 2020-01-21 深圳岚锋创视网络科技有限公司 一种全景图像的拼接方法、装置及便携式终端
CN109509146B (zh) * 2017-09-15 2023-03-24 腾讯科技(深圳)有限公司 图像拼接方法及装置、存储介质
CN108833785B (zh) * 2018-07-03 2020-07-03 清华-伯克利深圳学院筹备办公室 多视角图像的融合方法、装置、计算机设备和存储介质
CN109615593A (zh) * 2018-11-29 2019-04-12 北京市商汤科技开发有限公司 图像处理方法及装置、电子设备和存储介质
JP7174252B2 (ja) * 2019-04-25 2022-11-17 日本電信電話株式会社 物体情報処理装置、物体情報処理方法及び物体情報処理プログラム
CN110648281B (zh) * 2019-09-23 2021-03-16 华南农业大学 田间全景图生成方法、装置、系统、服务器及存储介质
CN111124231B (zh) * 2019-12-26 2021-02-12 维沃移动通信有限公司 图片生成方法及电子设备
CN112783839A (zh) * 2020-06-08 2021-05-11 北京金山办公软件股份有限公司 一种存储文档的方法、装置、电子设备及存储介质
CN113469880A (zh) * 2021-05-28 2021-10-01 北京迈格威科技有限公司 图像拼接方法及装置、存储介质及电子设备
CN113592777B (zh) * 2021-06-30 2024-07-12 北京旷视科技有限公司 双摄拍照的图像融合方法、装置和电子系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146231A (zh) * 2007-07-03 2008-03-19 浙江大学 根据多视角视频流生成全景视频的方法
WO2008111080A1 (en) * 2007-03-15 2008-09-18 Yissum Research Development Company Of The Hebrew University Of Jerusalem Method and system for forming a panoramic image of a scene having minimal aspect distortion
CN101923709A (zh) * 2009-06-16 2010-12-22 日电(中国)有限公司 图像拼接方法与设备
CN104301630A (zh) * 2014-09-10 2015-01-21 天津航天中为数据系统科技有限公司 一种视频图像拼接方法及装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9129399B2 (en) * 2013-03-11 2015-09-08 Adobe Systems Incorporated Optical flow with nearest neighbor field fusion
CN103325108A (zh) * 2013-05-27 2013-09-25 浙江大学 一种融合光流与特征点匹配的单目视觉里程计的设计方法
CN105023278B (zh) * 2015-07-01 2019-03-05 中国矿业大学 一种基于光流法的运动目标跟踪方法及系统

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109766465A (zh) * 2018-12-26 2019-05-17 中国矿业大学 一种基于机器学习的图文融合图书推荐方法
CN111915483A (zh) * 2020-06-24 2020-11-10 北京迈格威科技有限公司 图像拼接方法、装置、计算机设备和存储介质
CN111915483B (zh) * 2020-06-24 2024-03-19 北京迈格威科技有限公司 图像拼接方法、装置、计算机设备和存储介质
CN117853351A (zh) * 2023-11-01 2024-04-09 广州力加贺电子科技有限公司 一种基于摄像头阵列的拍照融合方法及装置

Also Published As

Publication number Publication date
CN106803899B (zh) 2019-10-01
CN106803899A (zh) 2017-06-06

Similar Documents

Publication Publication Date Title
WO2017088533A1 (zh) 合并图像的方法和装置
CN107945112B (zh) 一种全景图像拼接方法及装置
WO2017091927A1 (zh) 图像处理方法和双摄像头系统
JP6371553B2 (ja) 映像表示装置および映像表示システム
WO2016110239A1 (zh) 图像处理方法和装置
CN107451952B (zh) 一种全景视频的拼接融合方法、设备以及系统
CN107274337B (zh) 一种基于改进光流的图像拼接方法
CN106997579B (zh) 图像拼接的方法和装置
WO2022170824A1 (zh) 图像拼接的处理方法、装置、电子系统和设备、可读介质
CN106981078B (zh) 视线校正方法、装置、智能会议终端及存储介质
CN111915483B (zh) 图像拼接方法、装置、计算机设备和存储介质
WO2019127269A1 (zh) 图像拼接方法、图像拼接装置及电子设备
CN111553841B (zh) 一种基于最佳缝合线更新的实时视频拼接方法
US20170316570A1 (en) Image processing apparatus and method
CN111292278B (zh) 图像融合方法及装置、存储介质、终端
Zhao et al. Cross-scale reference-based light field super-resolution
CN114143528A (zh) 多视频流融合方法、电子设备、存储介质
US20230394834A1 (en) Method, system and computer readable media for object detection coverage estimation
Pan et al. Depth map completion by jointly exploiting blurry color images and sparse depth maps
CN117612138A (zh) 一种车位检测方法、装置、设备及存储介质
US9392146B2 (en) Apparatus and method for extracting object
Zhao et al. Cross-camera deep colorization
CN115619636A (zh) 图像拼接方法、电子设备以及存储介质
CN115965531A (zh) 模型训练方法及图像生成方法、装置、设备和存储介质
Yuan et al. Fast image blending and deghosting for panoramic video

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16867760; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 16867760; Country of ref document: EP; Kind code of ref document: A1)