WO2017000484A1 - Panoramic image generation method and apparatus for a user terminal - Google Patents

Panoramic image generation method and apparatus for a user terminal

Info

Publication number
WO2017000484A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
adjacent
images
color
optimization
Prior art date
Application number
PCT/CN2015/095070
Other languages
English (en)
French (fr)
Inventor
谢国富
艾锐
刘丽
侯文博
郎咸朋
Original Assignee
百度在线网络技术(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 百度在线网络技术(北京)有限公司
Priority to JP2017565747A (patent JP6605049B2)
Priority to KR1020177031584A (patent KR101956151B1)
Priority to US15/739,801 (patent US10395341B2)
Priority to EP15897011.1A (patent EP3319038A4)
Publication of WO2017000484A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/90: Dynamic range modification of images or parts thereof
    • G06T5/92: Dynamic range modification based on global image properties
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/698: Control for achieving an enlarged field of view, e.g. panoramic image capture

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to a panoramic image generation method and apparatus for a user terminal.
  • Panoramic images can be captured with a wide-angle lens, but this is limited by the shooting hardware; more commonly, a panorama is stitched together from multiple images so as to show as much of the surrounding environment as possible.
  • A panoramic image is usually obtained by splicing multiple images in one of several ways. One is the general splicing method, which uses Scale-Invariant Feature Transform (SIFT) features and bundle adjustment to optimize the image stitching. The other is a splicing method for mobile phones, which can be divided into using the phone's built-in sensors to record its motion track to accelerate image stitching, and improving stitched image quality by compensating color and illumination in the overlapping areas.
  • SIFT: Scale-Invariant Feature Transform
  • Bundle adjustment
  • the present invention aims to solve at least one of the technical problems in the related art to some extent.
  • an object of the present invention is to provide a panoramic image generating method for a user terminal, which can improve image stitching speed.
  • Another object of the present invention is to provide a panoramic image generating apparatus for a user terminal.
  • The method for generating a panoramic image for a user terminal includes: acquiring a plurality of images captured by a user terminal, determining the adjacency relationship between the multiple images, and performing feature matching on adjacent images to obtain matched feature point pairs; obtaining optimized camera parameters according to the matched feature point pairs and initial camera parameters; performing color adjustment on adjacent images to obtain color-adjusted adjacent images; and splicing the color-adjusted adjacent images according to the optimized camera parameters to generate a panoramic image.
  • In the panoramic image generating method for a user terminal proposed by the first aspect of the present invention, determining the adjacency relationship between the multiple images and extracting features only from adjacent images can meet the accuracy requirements while reducing the workload of feature extraction, thereby improving the image stitching speed.
  • A panoramic image generating apparatus for a user terminal includes: a matching module, configured to acquire a plurality of images captured by a user terminal, determine the adjacency relationship between the multiple images, and perform feature matching on adjacent images to obtain matched feature point pairs; an optimization module, configured to obtain optimized camera parameters according to the matched feature point pairs and initial camera parameters; an adjustment module, configured to perform color adjustment on adjacent images to obtain color-adjusted adjacent images; and a splicing module, configured to splice the color-adjusted adjacent images according to the optimized camera parameters to generate a panoramic image.
  • The panoramic image generating apparatus for a user terminal proposed by the second aspect of the present invention can satisfy the accuracy requirements and reduce the feature extraction workload by determining the adjacency relationship between the multiple images and extracting features only from adjacent images, thereby improving the image stitching speed.
  • An embodiment of the present invention further provides a user terminal, including: one or more processors; a memory; and one or more programs stored in the memory which, when executed by the one or more processors, perform the method according to any embodiment of the first aspect of the present invention.
  • Embodiments of the present invention also provide a non-volatile computer storage medium storing one or more modules which, when executed, perform the method of the first aspect of the present invention.
  • FIG. 1 is a schematic flowchart of a method for generating a panoramic image for a user terminal according to an embodiment of the present invention
  • FIG. 5 is a schematic flowchart of S14 in the embodiment of the present invention.
  • FIG. 6a is a schematic diagram of images to be stitched in an embodiment of the present invention.
  • FIG. 6b is a schematic diagram of mask maps corresponding to the images in an embodiment of the present invention.
  • FIG. 6c is a schematic diagram of a panorama after splicing in an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a panoramic image generating apparatus for a user terminal according to another embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a panoramic image generating apparatus for a user terminal according to another embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of a method for generating a panoramic image for a user terminal according to an embodiment of the present invention, the method includes:
  • S11 Acquire multiple images captured by the user terminal, determine an adjacent relationship between the multiple images, perform feature matching on the adjacent images, and obtain matching feature point pairs;
  • The camera of the user terminal can be used to photograph the surrounding environment, obtaining multiple images under different conditions (such as angle, illumination, etc.).
  • To reduce the amount of computation, only the matching relationship between adjacent images is calculated; the matching relationship between non-adjacent images is not computed. Therefore, adjacent images can be determined first.
  • the user terminal is a mobile device as an example.
  • As shown in FIG. 2, an implementation flowchart of S11, determining adjacent images from the multiple images includes:
  • S21: For each image, acquire the information of a sensor disposed in the mobile device when the image was captured, and determine adjacent images according to the sensor information.
  • sensors such as a gyroscope, a geomagnetic instrument, or a gravity sensor.
  • When each image is captured, the corresponding sensor information is recorded; adjacent images can then be determined based on this information.
  • For example, suppose the capture angle of the first image is a first angle, that of the second image is a second angle, and that of the third image is a third angle. If the difference between the first angle and the second angle is smaller than the difference between the first angle and the third angle, the first image may be determined to be adjacent to the second image.
  • The above manner of determining adjacent images is only a simplified example; adjacency can also be determined in other manners.
  • the mobile device's own resources can be fully utilized, and the adjacent image can be conveniently and quickly determined.
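  • As an illustrative sketch (not taken from the patent text), the angle-difference adjacency check described above might look as follows; `adjacent_pairs` and its `max_gap_deg` threshold are hypothetical names, and the input angles stand for per-image horizontal capture angles recorded from the gyroscope or geomagnetic sensor at shooting time:

```python
from itertools import combinations

def adjacent_pairs(yaw_angles, max_gap_deg=60.0):
    """Determine adjacent image pairs from per-image capture angles.

    yaw_angles: list of horizontal capture angles in degrees, one per
    image, as recorded by the device sensors. Two images are treated as
    adjacent when their angular separation (wrapping around 360) is
    below max_gap_deg. The threshold is illustrative.
    """
    pairs = []
    for i, j in combinations(range(len(yaw_angles)), 2):
        diff = abs(yaw_angles[i] - yaw_angles[j]) % 360.0
        diff = min(diff, 360.0 - diff)  # handle wrap-around at 0/360
        if diff < max_gap_deg:
            pairs.append((i, j))
    return pairs
```

For instance, `adjacent_pairs([0, 45, 170])` pairs only the first two images, since the third is too far apart in angle.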
  • feature extraction may be performed separately for each of the adjacent images to complete feature matching.
  • performing feature matching on adjacent images to obtain matching feature point pairs includes:
  • S23: Divide each of the adjacent images into a preset number of regions and, in each region, extract at most a preset number of feature points.
  • each image in an adjacent image can be divided into four regions, each region extracting a limited number of feature points.
  • the feature points may specifically be SIFT feature points.
  • A Random Sample Consensus (RANSAC) algorithm is used to match SIFT feature points in adjacent images to obtain matched feature point pairs.
  • the amount of calculation can be reduced, thereby increasing the splicing speed.
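  • A minimal sketch of the region-wise capping in S23, assuming the keypoints have already been detected by an external SIFT implementation and are given as (x, y, response) rows; the 2x2 grid and the per-region cap are illustrative values, not taken from the patent:

```python
import numpy as np

def cap_keypoints_per_region(keypoints, w, h, grid=2, max_per_region=50):
    """Keep at most max_per_region strongest keypoints in each cell of a
    grid x grid partition of a w x h image.

    keypoints is an (N, 3) array of (x, y, response); the SIFT detection
    itself is assumed to be done by an external library.
    """
    kept = []
    xs = np.linspace(0, w, grid + 1)
    ys = np.linspace(0, h, grid + 1)
    for gx in range(grid):
        for gy in range(grid):
            in_cell = ((keypoints[:, 0] >= xs[gx]) & (keypoints[:, 0] < xs[gx + 1]) &
                       (keypoints[:, 1] >= ys[gy]) & (keypoints[:, 1] < ys[gy + 1]))
            cell = keypoints[in_cell]
            order = np.argsort(-cell[:, 2])  # strongest response first
            kept.append(cell[order[:max_per_region]])
    return np.vstack(kept)
```

Capping per region (rather than globally) keeps the surviving feature points spread across the whole image, which helps the later global optimization.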
  • The image may be pre-processed before feature point extraction, and/or incorrectly matched feature point pairs may be removed after matching.
  • As shown in FIG. 2, the method may also include:
  • Pre-processing the adjacent image may include: color enhancement, and/or size scaling.
  • Color enhancement refers to enhancing dark images; for example, if the brightness values of an image's pixels are below a preset value, the brightness may be increased by a preset increment.
  • Scaling refers to scaling the original image to a size suitable for mobile device processing, where the size can be determined based on the type of mobile device.
  • the pre-processed image can be subjected to feature extraction and feature matching.
  • For details of feature extraction and feature matching, refer to the related description above; they are not repeated here.
  • the processing effect can be improved and adapted to the mobile device.
  • Since the matched feature point pairs are obtained using RANSAC, some pairs may still be matched incorrectly; therefore, the incorrectly matched feature point pairs may be removed afterwards.
  • the method may further include:
  • S25: Perform filtering on the matched feature point pairs to remove the incorrectly matched pairs.
  • a heuristic algorithm can be used for filtering processing.
  • For a matched feature point pair, the two points A and B can be connected to obtain a line segment AB.
  • Similarly, line segments CD, EF, etc. can be obtained for the other matched pairs.
  • The line segments are then compared: feature point pairs whose segments have substantially the same length and are substantially parallel are retained, and the other pairs are removed.
  • The removed pairs usually include incorrectly matched SIFT feature point pairs, which tend to violate the parallelism condition, and erroneous RANSAC inliers, which tend to violate the equal-length condition.
  • After filtering, the filtered matched feature point pairs are obtained; in subsequent processing, the matched feature point pairs used refer to these filtered pairs.
  • the accuracy of the matched feature point pairs can be improved, thereby improving the accuracy of image stitching.
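  • The length-and-parallelism heuristic can be sketched as follows; the median-based comparison and the tolerance values are implementation assumptions, since the patent only states that pairs whose segments are of substantially equal length and substantially parallel are kept:

```python
import numpy as np

def filter_matches(pts_a, pts_b, len_tol=0.2, ang_tol_deg=10.0):
    """Heuristic filter for matched feature point pairs.

    Connecting each matched pair (A_i, B_i) gives a line segment; correct
    matches between adjacent images yield segments of roughly equal length
    and roughly parallel direction. Pairs whose segment deviates from the
    median length by more than len_tol (relative) or from the median
    direction by more than ang_tol_deg are removed. Tolerances are
    illustrative, not from the patent.
    """
    seg = pts_b - pts_a                       # (N, 2) segment vectors
    lengths = np.hypot(seg[:, 0], seg[:, 1])
    angles = np.degrees(np.arctan2(seg[:, 1], seg[:, 0]))
    med_len, med_ang = np.median(lengths), np.median(angles)
    ang_diff = np.abs((angles - med_ang + 180.0) % 360.0 - 180.0)
    keep = (np.abs(lengths - med_len) <= len_tol * med_len) & (ang_diff <= ang_tol_deg)
    return pts_a[keep], pts_b[keep]
```

A match whose segment points in a clearly different direction (a wrong SIFT match) or has a clearly different length (a wrong RANSAC inlier) is dropped by the same test.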
  • Global camera parameter optimization can be performed using a bundle adjustment algorithm.
  • Bundle adjustment is a nonlinear optimization that refines camera parameters by minimizing the projection differences of matched feature point pairs across all images.
  • The bundle adjustment problem is solved using the Levenberg-Marquardt iterative algorithm.
  • This nonlinear optimization is very sensitive to the initial values; if the given initial values are poor, only a local optimum may be found, resulting in misalignment or ghosting in the stitched panorama.
  • the process of globally optimizing camera parameters may include:
  • the camera parameters may include: a focal length, a rotation matrix, and the like.
  • Conventionally, the initial camera parameters are random initial values; experimental verification shows that initial camera parameters obtained from the sensor information of this embodiment yield a better optimum, reducing stitching ghosting and misalignment.
  • the Levenberg-Marquardt iterative algorithm is used to solve the optimized camera parameters.
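  • For illustration, a minimal generic Levenberg-Marquardt loop is sketched below on a toy line-fitting residual; in the actual method the residuals would be the projection differences of the matched feature point pairs under the current camera parameters (focal length, rotation), which is omitted here for brevity:

```python
import numpy as np

def levenberg_marquardt(residual_fn, x0, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop with a numeric Jacobian.

    residual_fn(x) returns the residual vector to minimize; in bundle
    adjustment these would be the projection differences of matched
    feature point pairs under the camera parameters x.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual_fn(x)
        eps = 1e-6  # forward-difference Jacobian
        J = np.stack([(residual_fn(x + eps * np.eye(len(x))[k]) - r) / eps
                      for k in range(len(x))], axis=1)
        A = J.T @ J + lam * np.eye(len(x))       # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        if np.sum(residual_fn(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.5         # accept: damp less
        else:
            lam *= 10.0                          # reject: damp more
    return x

# toy residual: fit y = a*x + b to noiseless data
xs = np.linspace(0, 1, 20)
ys = 2.0 * xs + 1.0
params = levenberg_marquardt(lambda p: p[0] * xs + p[1] - ys, [0.0, 0.0])
```

The damping factor interpolates between gradient descent (large lambda) and Gauss-Newton (small lambda), which is what makes the method robust to poor steps while still converging quickly near the optimum.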
  • In some cases the camera parameter optimization fails; the method may then further include the following:
  • First, the predetermined ideal camera parameters are used as the initial camera parameters and the optimization is re-run.
  • The ideal camera parameters may be determined by determining the layout of the images in each layer captured by the user terminal (such as a mobile device), estimating the angle of each image by spreading the images evenly, and deriving the ideal camera parameters from those angles.
  • If the re-optimization also fails and the plurality of images are divided into three layers, the image of the lowest layer is removed and the images of the upper two layers are used for optimization;
  • When optimizing with only some of the images, the feature point pairs corresponding to the removed images are discarded.
  • For example, when shooting with a mobile phone, three layers of images may be captured; the image of the bottom layer can be removed first and the images of the upper two layers optimized, to ensure that the upper two layers are stitched correctly.
  • The optimization of the upper two layers is performed according to the matched feature point pairs of those two layers of images and the initial camera parameters.
  • During this optimization, the initial camera parameters may preferentially be those determined from the sensor information in this embodiment, with the predetermined ideal camera parameters selected otherwise.
  • the accuracy of the optimal solution can be improved, thereby improving the splicing effect.
  • the robustness can be enhanced by adopting the processing scheme after the optimization failure described above.
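  • The "ideal camera parameters" fallback above estimates evenly distributed angles for the images of each layer; one plausible sketch is below. The layer layout (yaw spread uniformly over 360 degrees, a fixed pitch step per layer) is an assumption about the capture pattern, not a value from the patent:

```python
def ideal_angles(images_per_layer, pitch_step_deg=30.0):
    """Estimate evenly distributed capture angles as 'ideal' initial
    camera parameters.

    images_per_layer maps a layer index (0 = middle layer) to the number
    of images in that layer. Yaw is spread uniformly over 360 degrees
    within a layer and pitch is a fixed step per layer; both choices are
    illustrative assumptions about the capture pattern.
    """
    angles = []
    for layer, n in images_per_layer.items():
        pitch = layer * pitch_step_deg
        for k in range(n):
            yaw = 360.0 * k / n
            angles.append((yaw, pitch))
    return angles
```

The resulting (yaw, pitch) pairs would then be converted into per-image rotation matrices to seed the re-run of the optimization.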
  • The camera's shooting parameters are recalculated for each picture. Changes in lighting across different areas of the scene result in different exposures for adjacent pictures; objects of different colors in different parts of the scene can also affect the white balance setting, so the same object may appear different in adjacent pictures, brighter in some and darker in others. Without additional color and lighting processing, color unevenness will appear in the overlapping areas of the panorama, so the original images must be compensated for color and illumination before splicing.
  • the color adjustment of the adjacent image is performed to obtain a color-adjusted image, including:
  • the algorithm for determining the overlapping area of the two images can be implemented by the prior art.
  • S42 Determine a correction parameter that minimizes a difference between the two sets of pixel values according to pixel values of two sets of pixel points of the adjacent image in the overlapping area.
  • For example, if adjacent images A and B overlap in region D, the pixel values of the pixels of A within D and of B within D may be acquired.
  • A least-squares algorithm then computes the correction parameters for A and B that minimize the difference between the two sets of pixel values; the correction parameter is, for example, a gamma correction parameter.
  • S43: Perform color adjustment on the adjacent images using the correction parameters to obtain color-adjusted adjacent images.
  • color correction can be performed on A using the correction parameters of A, and color correction is performed on B using the correction parameters of B.
  • the color unevenness of the panorama can be solved by color correction.
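  • The least-squares gamma estimation of S42 can be sketched as follows. Working in log space and adding a prior that pulls both gammas toward 1 are implementation choices of this sketch (without the prior, zero gammas would trivially minimize the difference); the patent only specifies a least-squares fit over the two sets of overlap pixel values:

```python
import numpy as np

def gamma_pair(overlap_a, overlap_b, prior=1.0):
    """Least-squares gamma correction parameters for two adjacent images.

    overlap_a / overlap_b are the pixel values (0..1) of images A and B
    inside their overlapping region D. In log space, gamma correction
    I -> I**g becomes a linear scaling of log I, so we minimize
    sum((gA*log a - gB*log b)**2) plus a prior pulling both gammas
    toward 1, and solve the 2x2 normal equations in closed form.
    """
    a = np.log(np.clip(overlap_a, 1e-4, 1.0))
    b = np.log(np.clip(overlap_b, 1e-4, 1.0))
    M = np.array([[np.sum(a * a) + prior, -np.sum(a * b)],
                  [-np.sum(a * b), np.sum(b * b) + prior]])
    g = np.linalg.solve(M, np.array([prior, prior]))
    return g  # (gamma_A, gamma_B); apply as image ** gamma
```

When the overlaps already agree, both gammas come out as 1 (no correction); when B is darker than A, the solver darkens A (gamma above 1) and brightens B (gamma below 1) so the two meet in the middle.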
  • adjacent images may be stitched to generate a panorama.
  • the splicing of the color-adjusted adjacent images according to the optimized camera parameters to generate a panoramic image includes:
  • S51 Perform a reduction process on the adjacent image after the color adjustment to obtain a reduced image.
  • For example, each of the adjacent images can be reduced to 1/8 of its original size.
  • Determining the seam in each image from the adjacent images may use existing techniques.
  • Existing techniques generally operate on images at the original size; in this embodiment the image is first reduced and the seam is determined on the reduced, low-resolution image, which reduces the workload and saves time.
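  • As a sketch of seam determination on the reduced image, a common dynamic-programming seam over a per-pixel difference map is shown below; the patent defers seam finding to prior art, so this particular algorithm is an assumption. The reduction itself can be as simple as subsampling, e.g. `small = img[::8, ::8]`:

```python
import numpy as np

def vertical_seam(diff):
    """Find a minimal-cost vertical seam through a per-pixel difference
    map (e.g. |A - B| over the overlap of two adjacent, reduced images)
    by dynamic programming; returns the seam column index per row."""
    h, w = diff.shape
    cost = diff.astype(float).copy()
    for y in range(1, h):
        left = np.r_[np.inf, cost[y - 1, :-1]]
        right = np.r_[cost[y - 1, 1:], np.inf]
        cost[y] += np.minimum(cost[y - 1], np.minimum(left, right))
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):   # backtrack within a 3-column window
        x = seam[y + 1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam
```

Because the DP cost is linear in the pixel count, running it on a 1/8-scale image cuts the work by roughly a factor of 64 relative to full resolution.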
  • S53: Generate a mask map corresponding to each image according to the optimized camera parameters and the seams, the mask map comprising a component portion and a joint portion.
  • The position of each image in the panorama may be determined from the optimized camera parameters; once the position and the seams are determined, the image's mask map can be determined, consisting of the component portion and the joint portion.
  • Four images used for stitching are shown in FIG. 6a, and the mask maps corresponding to the four images may be as shown in FIG. 6b, where the component portion is the white area and the joint portion is the black area.
  • S54: Perform multi-layer hybrid fusion on the joint portions of adjacent images to obtain fused portions, and compose the panorama from the component portions and the fused portions.
  • The component portions in the mask maps can be used directly as the corresponding parts of the panorama; since the joint portions overlap across adjacent images, they require further processing.
  • Existing multi-layer blending methods can be used for this fusion.
  • The panorama is thus composed of the component portions and the fused portions.
  • The mask map size is chosen to be consistent with the panorama size (that is, the resolutions match), so that the component portions and fused portions can directly compose the panorama.
  • the stitched panorama can be as shown in Figure 6c.
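  • A simplified compositing sketch: pixels covered by a single mask (component portions) are copied directly, and pixels where masks overlap (joint portions) are averaged here as a stand-in for the multi-layer fusion. Real multi-band blending operates on Laplacian pyramids and is not reproduced in this sketch:

```python
import numpy as np

def composite(images, masks):
    """Composite a panorama from per-image layers and mask maps.

    images are panorama-sized float arrays (zero outside each image's
    warped footprint) and masks are 0/1 arrays of the same size marking
    each image's component portion plus its joint portion. Pixels covered
    by one mask are copied directly; overlapping pixels are averaged as a
    simple stand-in for multi-band blending of the fused portion.
    """
    masks = [m.astype(float) for m in masks]
    weight = np.sum(masks, axis=0)                       # coverage count
    acc = np.sum([im * m for im, m in zip(images, masks)], axis=0)
    return acc / np.maximum(weight, 1.0)                 # avoid div by 0
```

Because the masks are panorama-sized, the per-image layers and the blended overlap drop straight into place without any further coordinate mapping.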
  • The mask maps are chosen to match the size of the panorama; and since the panorama is stitched in three dimensions but displayed in two dimensions, a mask map may appear at display time whose joint portion is large and whose component portions are dispersed at the two ends of the joint portion. In that case, the following can also be done:
  • The mask map is divided into two small maps, and each small map is processed separately.
  • Each small map is processed as a separate image.
  • With this splitting, the processing of each mask map can be shortened by 1 to 2 seconds; since there are generally 7 to 8 mask maps, the stitching speed can be noticeably improved.
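  • The mask-splitting step can be sketched as follows; finding runs of occupied columns separated by an empty gap is one plausible way to locate the two component clusters at the ends of a panorama-wide mask, and the gap threshold is illustrative:

```python
import numpy as np

def split_mask(mask, min_gap=8):
    """Split a panorama-sized mask map into smaller column crops.

    When a mask's nonzero areas sit at the two ends of the panorama (a
    side effect of stitching in 3D but displaying in 2D), processing the
    full-width mask wastes time. This finds runs of nonzero columns
    separated by at least min_gap empty columns and returns one
    (col_start, col_end) crop per run; min_gap is an illustrative value.
    """
    occupied = mask.any(axis=0).astype(int)
    crops, start, gap = [], None, 0
    for x, occ in enumerate(occupied):
        if occ:
            if start is None:
                start = x
            end, gap = x + 1, 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:      # run ended: emit the crop
                crops.append((start, end))
                start = None
    if start is not None:
        crops.append((start, end))
    return crops
```

Each returned column range can then be cropped out and blended as an independent small image, which is where the per-mask time saving comes from.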
  • The above embodiments of the present invention can be specifically applied to image stitching on a mobile phone. It can be understood that the above solution can also be applied to other mobile devices, for example to stitching in an in-vehicle system; in other scenarios, specific parameters can be adjusted according to actual conditions.
  • the accuracy requirement can be met and the workload of feature extraction can be reduced, and the image stitching speed can be improved.
  • The foregoing method of the embodiment is particularly applicable to mobile devices: through the above improvements to image preprocessing, feature matching, global optimization, global color adjustment, and multi-layer blending, high-speed, high-quality panoramic stitching of multi-layer photos can be performed on a mobile device. According to experiments, the stitching success rate is above 80% and stitching completes within 40 seconds.
  • the solution of this embodiment can be applied to different kinds of user terminals to improve their performance.
  • FIG. 7 is a schematic structural diagram of a panoramic image generating apparatus for a user terminal according to another embodiment of the present invention.
  • the apparatus may be located in a user terminal, and the apparatus 70 includes:
  • the matching module 71 is configured to acquire a plurality of images captured by the user terminal, determine an adjacent relationship between the multiple images, and perform feature matching on the adjacent images to obtain a matching feature point pair;
  • The camera of the user terminal can be used to photograph the surrounding environment, obtaining multiple images under different conditions (such as angle, illumination, etc.).
  • To reduce the amount of computation, only the matching relationship between adjacent images is calculated; the matching relationship between non-adjacent images is not computed. Therefore, adjacent images can be determined first.
  • the matching module 71 is configured to determine an adjacent image from the multiple images, including:
  • the information of the sensor disposed in the mobile device is acquired, and the adjacent image is determined according to the information of the sensor.
  • sensors such as a gyroscope, a geomagnetic instrument, or a gravity sensor.
  • the information of the sensor corresponding to the image is recorded, and then the adjacent image can be determined based on the information.
  • For example, suppose the capture angle of the first image is a first angle, that of the second image is a second angle, and that of the third image is a third angle. If the difference between the first angle and the second angle is smaller than the difference between the first angle and the third angle, the first image may be determined to be adjacent to the second image.
  • The above manner of determining adjacent images is only a simplified example; adjacency can also be determined in other manners.
  • the mobile device's own resources can be fully utilized, and the adjacent image can be conveniently and quickly determined.
  • feature extraction may be performed separately for each of the adjacent images to complete feature matching.
  • the matching module 71 is configured to perform feature matching on the adjacent image to obtain a matching feature point pair, including:
  • each image in an adjacent image can be divided into four regions, each region extracting a limited number of feature points.
  • the feature points may specifically be SIFT feature points.
  • Feature matching is performed according to feature points extracted in adjacent images, and matched feature point pairs are obtained.
  • A Random Sample Consensus (RANSAC) algorithm is used to match SIFT feature points in adjacent images to obtain matched feature point pairs.
  • the amount of calculation can be reduced, thereby increasing the splicing speed.
  • The matching module is further configured to pre-process the image before feature point extraction, and/or to remove incorrectly matched feature point pairs after matching.
  • An optimization module 72 configured to obtain optimized camera parameters according to the matched feature point pairs and initial camera parameters
  • Global camera parameter optimization can be performed using a bundle adjustment algorithm.
  • Bundle adjustment is a nonlinear optimization that refines camera parameters by minimizing the projection differences of matched feature point pairs across all images.
  • The bundle adjustment problem is solved using the Levenberg-Marquardt iterative algorithm.
  • This nonlinear optimization is very sensitive to the initial values; if the given initial values are poor, only a local optimum may be found, resulting in misalignment or ghosting in the stitched panorama.
  • the apparatus 70 further includes: a determining module 75 for determining the initial value, the determining module 75 is specifically configured to:
  • the camera parameters may include: a focal length, a rotation matrix, and the like.
  • Conventionally, the initial camera parameters are random initial values; experimental verification shows that initial camera parameters obtained from the sensor information of this embodiment yield a better optimum, reducing stitching ghosting and misalignment.
  • the apparatus 70 further includes:
  • The processing module 76 is configured to: re-optimize with the predetermined ideal camera parameters as the initial camera parameters if the camera parameter optimization fails; if the re-optimization fails and the images are divided into three layers, remove the image of the lowest layer and optimize using the images of the upper two layers; and if optimization using the upper two layers fails, optimize using only the image of the middle layer.
  • the ideal camera parameters may be determined by determining the image format of each layer of the user terminal (such as a mobile device), and then estimating the angle of each image on average, and obtaining an ideal camera parameter according to the angle.
  • When the ideal camera parameters are used as the initial camera parameters for re-optimization, the feature point pairs corresponding to some of the images can be removed.
  • For example, when shooting with a mobile phone, three layers of images may be captured; the image of the bottom layer can be removed first and the images of the upper two layers optimized, to ensure that the upper two layers are stitched correctly.
  • the camera parameters may include: a focal length, a rotation matrix, and the like.
  • The initial camera parameters may preferentially be the initial values determined from the sensor information in the above embodiment, with the ideal camera parameters selected otherwise. If the upper two layers cannot be optimized, only the image of the middle layer is optimized, to ensure that the middle layer is stitched correctly; in that case the feature point pairs corresponding to the middle layer image and the initial camera parameters are used for the computation, and during the computation the initial camera parameters may again preferentially be those determined from the sensor information, with the ideal camera parameters selected otherwise.
  • the accuracy of the optimal solution can be improved, thereby improving the splicing effect.
  • the robustness can be enhanced by adopting the processing scheme after the optimization failure described above.
  • the adjusting module 73 is configured to perform color adjustment on the adjacent image to obtain a color-adjusted adjacent image
  • The camera's shooting parameters are recalculated for each picture. Changes in lighting across different areas of the scene result in different exposures for adjacent pictures; objects of different colors in different parts of the scene can also affect the white balance setting, so the same object may appear different in adjacent pictures, brighter in some and darker in others. Without additional color and lighting processing, color unevenness will appear in the overlapping areas of the panorama, so the original images must be compensated for color and illumination before splicing.
  • the adjusting module 73 is specifically configured to:
  • the algorithm for determining the overlapping area of the two images can be implemented by the prior art.
  • For example, if adjacent images A and B overlap in region D, the pixel values of the pixels of A within D and of B within D may be acquired.
  • A least-squares algorithm then computes the correction parameters for A and B that minimize the difference between the two sets of pixel values; the correction parameter is, for example, a gamma correction parameter.
  • the adjacent image is color-adjusted by using the correction parameter to obtain a color-adjusted adjacent image.
  • color correction can be performed on A using the correction parameters of A, and color correction is performed on B using the correction parameters of B.
  • the color unevenness of the panorama can be solved by color correction.
  • the splicing module 74 is configured to splicing the color-adjusted adjacent images according to the optimized camera parameters to generate a panoramic image.
  • adjacent images may be stitched to generate a panorama.
  • the splicing module 74 is specifically configured to:
  • For example, each of the adjacent images can be reduced to 1/8 of its original size.
  • Determining the seam in each image from the adjacent images may use existing techniques.
  • Existing techniques generally operate on images at the original size; in this embodiment the image is first reduced and the seam is determined on the reduced, low-resolution image, which reduces the workload and saves time.
  • The position of each image in the panorama may be determined from the optimized camera parameters; once the position and the seams are determined, the image's mask map can be determined, consisting of the component portion and the joint portion.
  • Four images used for stitching are shown in FIG. 6a, and the mask maps corresponding to the four images may be as shown in FIG. 6b, where the component portion is the white area and the joint portion is the black area.
  • The joint portions of adjacent images are subjected to multi-layer hybrid fusion to obtain fused portions, and the component portions and the fused portions are combined into the panorama.
  • The component portions in the mask maps can be used directly as the corresponding parts of the panorama; since the joint portions overlap across adjacent images, they require further processing.
  • Existing multi-layer blending methods can be used for this fusion.
  • The panorama is thus composed of the component portions and the fused portions.
  • The mask map size is chosen to be consistent with the panorama size (that is, the resolutions match), so that the component portions and fused portions can directly compose the panorama.
  • the stitched panorama can be as shown in Figure 6c.
  • the splicing module 74 is further configured to:
  • when the component part of a mask map is split into two pieces located at the two ends of the junction part, divide the mask map into two smaller maps so that each can be processed separately.
  • each small map is then processed as an independent image.
  • splitting the mask maps reduces processing time: each mask map can save 1 to 2 seconds, and there are typically 7 to 8 mask maps, so the stitching speed improves noticeably.
  • the above embodiments of the present invention can be applied specifically to image stitching on a mobile phone. It can be understood that the above solution of the present invention can also be applied to other user terminals, for example to stitching in an in-vehicle system. Further, in in-vehicle and other scenarios, specific parameters can be adjusted according to the actual situation.
  • by determining adjacent images and extracting features only from adjacent images, the accuracy requirement can be met while the feature extraction workload is reduced, improving the image stitching speed.
  • the foregoing method of this embodiment is particularly suitable for mobile devices: through the above improvements to image preprocessing, feature matching, global optimization, global color adjustment, and multi-band blending, multi-layer photos can be stitched into panoramas on a mobile device quickly and with high quality. Experiments show a stitching success rate above 80% and a stitching time within 40 seconds.
  • the solution of this embodiment can be applied to different kinds of user terminals to improve their performance.
  • An embodiment of the present invention further provides a user terminal, including: one or more processors; a memory; and one or more programs stored in the memory which, when executed by the one or more processors, perform the method according to any embodiment of the first aspect of the present invention.
  • Embodiments of the present invention also provide a non-volatile computer storage medium storing one or more modules which, when executed, perform the method according to any embodiment of the first aspect of the present invention.
  • portions of the invention may be implemented in hardware, software, firmware, or a combination thereof.
  • multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • for example, if implemented in hardware, as in another embodiment, they can be implemented by any one of, or a combination of, the following techniques well known in the art: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and so on.
  • each functional unit in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module.
  • the above integrated module may be implemented in the form of hardware or in the form of a software functional module.
  • if the integrated module is implemented in the form of a software functional module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium.
  • the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)
  • Stereoscopic And Panoramic Photography (AREA)
  • Image Analysis (AREA)

Abstract

A panoramic image generation method and apparatus for a user terminal. The method includes: acquiring multiple images captured by the user terminal, determining the adjacency relationships among the images, performing feature matching on adjacent images, and obtaining matched feature point pairs; obtaining optimized camera parameters according to the matched feature point pairs and initial camera parameters; performing color adjustment on adjacent images to obtain color-adjusted adjacent images; and stitching the color-adjusted adjacent images according to the optimized camera parameters to generate a panorama. The method improves image stitching speed.

Description

Panoramic Image Generation Method and Apparatus for a User Terminal
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 201510377251.6, entitled "Panoramic Image Generation Method and Apparatus for a User Terminal", filed by Baidu Online Network Technology (Beijing) Co., Ltd. on June 30, 2015.
Technical Field
The present invention relates to the field of image processing technology, and in particular to a panoramic image generation method and apparatus for a user terminal.
Background
A panorama can be captured with a wide-angle lens, but limited by the capture hardware, most panoramas are instead stitched from multiple images so as to show as much of the surrounding environment as possible. In the prior art, a panoramic image is usually stitched from multiple images in the following ways: one is a general-purpose stitching method that uses Scale-Invariant Feature Transform (SIFT) features and bundle adjustment optimization for image stitching; the other is a stitching method tailored to mobile phones, which can be divided into using the phone's built-in sensors to record the phone's trajectory to accelerate stitching, and performing color and illumination compensation on overlapping regions to improve the quality of the stitched image.
However, both the existing general-purpose stitching methods and the existing phone-oriented stitching methods are relatively slow.
Summary
The present invention aims to solve at least one of the technical problems in the related art, at least to some extent.
To this end, one object of the present invention is to provide a panoramic image generation method for a user terminal that can improve image stitching speed.
Another object of the present invention is to provide a panoramic image generation apparatus for a user terminal.
To achieve the above objects, a panoramic image generation method for a user terminal according to an embodiment of the first aspect of the present invention includes: acquiring multiple images captured by the user terminal, determining the adjacency relationships among the images, performing feature matching on adjacent images, and obtaining matched feature point pairs; obtaining optimized camera parameters according to the matched feature point pairs and initial camera parameters; performing color adjustment on adjacent images to obtain color-adjusted adjacent images; and stitching the color-adjusted adjacent images according to the optimized camera parameters to generate a panorama.
In the method of the first-aspect embodiment, by determining the adjacency relationships among the images and performing feature extraction only on adjacent images, the accuracy requirement can be met while the workload of feature extraction is reduced, thereby improving the image stitching speed.
To achieve the above objects, a panoramic image generation apparatus for a user terminal according to an embodiment of the second aspect of the present invention includes: a matching module configured to acquire multiple images captured by the user terminal, determine the adjacency relationships among the images, perform feature matching on adjacent images, and obtain matched feature point pairs; an optimization module configured to obtain optimized camera parameters according to the matched feature point pairs and initial camera parameters; an adjustment module configured to perform color adjustment on adjacent images to obtain color-adjusted adjacent images; and a stitching module configured to stitch the color-adjusted adjacent images according to the optimized camera parameters to generate a panorama.
In the apparatus of the second-aspect embodiment, by determining the adjacency relationships among the images and performing feature extraction only on adjacent images, the accuracy requirement can be met while the workload of feature extraction is reduced, thereby improving the image stitching speed.
An embodiment of the present invention further provides a user terminal, including: one or more processors; a memory; and one or more programs stored in the memory which, when executed by the one or more processors, perform the method according to any embodiment of the first aspect of the present invention.
An embodiment of the present invention further provides a non-volatile computer storage medium storing one or more modules which, when executed, perform the method according to any embodiment of the first aspect of the present invention. Additional aspects and advantages of the present invention will be set forth in part in the following description, will in part become apparent from the following description, or will be learned through practice of the present invention.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understandable from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a panoramic image generation method for a user terminal according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of S11 in an embodiment of the present invention;
Fig. 3 is a schematic flowchart of S12 in an embodiment of the present invention;
Fig. 4 is a schematic flowchart of S13 in an embodiment of the present invention;
Fig. 5 is a schematic flowchart of S14 in an embodiment of the present invention;
Fig. 6a is a schematic diagram of the images to be stitched in an embodiment of the present invention;
Fig. 6b is a schematic diagram of the mask maps corresponding to the images in an embodiment of the present invention;
Fig. 6c is a schematic diagram of the stitched panorama in an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a panoramic image generation apparatus for a user terminal according to another embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a panoramic image generation apparatus for a user terminal according to another embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar modules or modules having identical or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended only to explain the present invention; they are not to be construed as limiting the present invention. On the contrary, the embodiments of the present invention include all changes, modifications, and equivalents falling within the spirit and scope of the appended claims.
Fig. 1 is a schematic flowchart of a panoramic image generation method for a user terminal according to an embodiment of the present invention; the method includes:
S11: acquiring multiple images captured by the user terminal, determining the adjacency relationships among the images, performing feature matching on adjacent images, and obtaining matched feature point pairs.
The camera of the user terminal can be used to photograph the surrounding environment, obtaining multiple images under different conditions (such as angle and illumination).
In one embodiment, to reduce the amount of computation, matching relationships are computed only between adjacent images, not between non-adjacent images. Therefore, the adjacent images can be determined first.
Optionally, taking a mobile device as an example of the user terminal, referring to Fig. 2, which is an implementation flowchart of S11, determining adjacent images from the multiple images includes:
S21: for each image, acquiring the information of a sensor provided in the mobile device at the time the image was captured, and determining adjacent images according to the sensor information.
The sensor is, for example, a gyroscope, a magnetometer, or a gravity sensor.
For example, each time an image is captured, the corresponding sensor information is recorded, and adjacent images can later be determined from this information. For instance, suppose the angle of the first image is a first angle, the angle of the second image is a second angle, and the angle of the third image is a third angle; if the difference between the first and second angles is smaller than the difference between the third and first angles, the first image can be determined to be adjacent to the second image. Of course, it can be understood that this way of determining adjacent images is only a simplified example, and other implementations are possible.
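The angle-based adjacency test above can be sketched in plain Python. The `max_gap` threshold and the use of a single yaw angle per shot are illustrative assumptions, not values fixed by the patent:

```python
def adjacent_pairs(yaw_degrees, max_gap=50.0):
    """Pair up images whose recorded yaw angles (e.g. from a gyroscope or
    magnetometer reading taken at capture time) are close to each other.
    Angles wrap around at 360 degrees."""
    n = len(yaw_degrees)
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            diff = abs(yaw_degrees[i] - yaw_degrees[j]) % 360.0
            diff = min(diff, 360.0 - diff)   # wrap-around angular distance
            if diff <= max_gap:
                pairs.append((i, j))
    return pairs

# Three shots at 0°, 45°, and 350°: the 350° shot wraps around and is
# adjacent to the 0° shot but not to the 45° shot under a 50° threshold.
print(adjacent_pairs([0, 45, 350]))
```

Matching is then computed only for the returned pairs, which is where the computation saving of this embodiment comes from.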
In this embodiment, by using devices within the mobile device, the mobile device's own resources can be fully exploited to determine adjacent images conveniently and quickly.
After the adjacent images are determined, feature extraction can be performed separately on each of the adjacent images to complete feature matching.
In one embodiment, referring to Fig. 2, performing feature matching on adjacent images and obtaining matched feature point pairs includes:
S23: dividing each adjacent image into a preset number of regions, and extracting, in each region, feature points whose number is less than a preset value.
For example, each of the adjacent images can be divided into four regions, with a limited number of feature points extracted in each region.
The feature points may specifically be SIFT feature points.
S24: performing feature matching according to the feature points extracted in the adjacent images to obtain matched feature point pairs.
For example, the Random Sample Consensus (RANSAC) algorithm is used to match the SIFT feature points in the adjacent images to obtain matched feature point pairs.
In this embodiment, by limiting the number of feature points, the amount of computation can be reduced, thereby improving the stitching speed.
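The per-region capping of S23 might look like the following sketch; the 2x2 grid matches the four-region example, while the `cap` value and the use of a detector score for ranking are hypothetical parameters:

```python
def cap_features_per_region(points, width, height, grid=2, cap=100):
    """Split a width x height image into grid x grid regions (2x2 = the
    four-region example) and keep at most `cap` feature points per region.
    `points` is a list of (x, y, score) tuples; stronger scores win."""
    buckets = {}
    for x, y, score in points:
        gx = min(int(x * grid / width), grid - 1)
        gy = min(int(y * grid / height), grid - 1)
        buckets.setdefault((gx, gy), []).append((score, x, y))
    kept = []
    for cell in buckets.values():
        cell.sort(reverse=True)                  # strongest features first
        kept.extend((x, y) for _, x, y in cell[:cap])
    return kept

# Two points land in the top-left region; with cap=1 only the stronger
# (score 0.9) survives, plus the lone point in the bottom-right region.
pts = [(10, 10, 0.9), (20, 20, 0.5), (80, 80, 0.7)]
print(sorted(cap_features_per_region(pts, 100, 100, cap=1)))
```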
In one embodiment, the images may be preprocessed before feature point extraction, and/or some mismatched feature point pairs may be removed after matching.
Referring to Fig. 2, the method may further include:
S22: preprocessing the adjacent images, where the preprocessing may include color enhancement and/or size scaling.
Color enhancement means enhancing images that are too dark; for example, if the brightness value of a pixel of an image is less than a preset value, the brightness value can be increased by a preset increment.
Size scaling means scaling the original image to a size suitable for processing by the mobile device, where the size may be determined according to the type of mobile device.
After preprocessing, feature extraction and feature matching can be performed on the preprocessed images; for details, refer to the above description, which is not repeated here.
In this embodiment, preprocessing improves the processing result and adapts it to mobile devices.
After matched feature point pairs are obtained with RANSAC, mismatched pairs may still exist; therefore, mismatched feature point pairs can be removed afterwards.
In one embodiment, referring to Fig. 2, the method may further include:
S25: filtering the matched feature point pairs to remove mismatched pairs.
A heuristic algorithm can be used for the filtering.
For example, suppose the matched pairs are A and B, C and D, E and F, and so on; then A and B can be connected to obtain segment AB, and similarly segments CD, EF, etc. can be obtained. These segments are then compared: the feature point pairs whose segments have roughly the same length and are roughly parallel are retained, and the other pairs are removed. The removed pairs typically include mismatched SIFT feature point pairs, which usually fail the roughly-parallel condition, and erroneous RANSAC inliers, which usually fail the roughly-equal-length condition.
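The segment length/parallelism heuristic of S25 can be sketched as below. The median-based reference and the tolerances `len_tol` and `ang_tol` are illustrative choices, since the patent does not fix them:

```python
import math

def filter_matches(pairs, len_tol=0.1, ang_tol=10.0):
    """Keep only matched point pairs whose connecting segments have roughly
    the same length and are roughly parallel. `pairs` is a list of
    ((x1, y1), (x2, y2)) matches between two adjacent images."""
    def length(p):
        (x1, y1), (x2, y2) = p
        return math.hypot(x2 - x1, y2 - y1)

    def angle(p):
        (x1, y1), (x2, y2) = p
        return math.degrees(math.atan2(y2 - y1, x2 - x1))

    # Use the median segment as the reference for "roughly the same".
    med_len = sorted(length(p) for p in pairs)[len(pairs) // 2]
    med_ang = sorted(angle(p) for p in pairs)[len(pairs) // 2]
    kept = []
    for p in pairs:
        if abs(length(p) - med_len) <= len_tol * med_len and \
           abs(angle(p) - med_ang) <= ang_tol:
            kept.append(p)
    return kept

# Three consistent horizontal segments survive; the short vertical
# segment fails both the length and the parallelism test.
matches = [((0, 0), (100, 0)), ((0, 10), (100, 10)),
           ((5, 5), (105, 5)), ((0, 0), (0, 50))]
print(len(filter_matches(matches)))
```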
After filtering, the filtered matched feature point pairs are obtained; in subsequent processing, "matched feature point pairs" refers to these filtered pairs.
In this embodiment, the filtering improves the accuracy of the matched feature point pairs, thereby improving the accuracy of image stitching.
S12: obtaining optimized camera parameters according to the matched feature point pairs and initial camera parameters.
A bundle adjustment algorithm can be used for global camera parameter optimization.
The bundle adjustment algorithm is a nonlinear system that optimizes the camera parameters by minimizing the projection differences of the matched feature point pairs between all images.
The bundle adjustment algorithm is solved with the Levenberg-Marquardt iterative algorithm. Nonlinear optimization is very sensitive to the initial values: if the given initial values are poor, only a local optimum can be reached, causing misalignment or ghosting in the stitched panorama.
For this reason, in one embodiment, referring to Fig. 3, the flow of globally optimizing the camera parameters may include:
S31: determining the initial camera parameters, which includes:
acquiring the information of a sensor provided in the mobile device at the time each image was captured, and determining the initial camera parameters of each image according to this information.
The camera parameters may include the focal length, the rotation matrix, and so on.
In the prior art, the initial camera parameters use random initial values; experiments verify that the initial camera parameters obtained from sensor information in this embodiment yield a better optimum, thereby reducing stitching ghosting and misalignment.
S32: obtaining optimized camera parameters according to the initial camera parameters and the filtered matched feature point pairs.
For example, the Levenberg-Marquardt iterative algorithm is used to solve for the optimized camera parameters.
In addition, when solving with the Levenberg-Marquardt iterative algorithm, optimization may fail (the algorithm does not converge). The prior art offers no solution for optimization failure, whereas an embodiment of the present invention, referring to Fig. 3, may further include:
S33: after the optimization fails, adopting a failure recovery scheme, as follows:
if the camera parameter optimization fails, optimizing again with predetermined ideal camera parameters as the initial camera parameters;
the ideal camera parameters can be determined as follows: determine the image format captured by the user terminal (such as a mobile device) for each layer, estimate the angle of each image by even distribution, and obtain the ideal camera parameters from these angles.
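The even angular distribution used to derive the ideal camera parameters might be sketched as follows, under the assumption that one layer's shots cover a full 360-degree sweep:

```python
def ideal_yaws(num_images):
    """Evenly distribute `num_images` shots of one layer over 360 degrees
    to obtain fallback ("ideal") initial yaw angles for re-optimization
    when the first bundle adjustment run fails to converge."""
    step = 360.0 / num_images
    return [i * step for i in range(num_images)]

# Four shots per layer -> initial yaws at 0°, 90°, 180°, 270°.
print(ideal_yaws(4))
```

These angles would then be converted into initial rotation matrices for the solver; that conversion is omitted here for brevity.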
if the re-optimization fails and the multiple images are divided into three layers, removing the images of the bottom layer and optimizing using the images of the upper two layers;
when optimization with the ideal camera parameters as initial camera parameters also fails, the feature point pairs corresponding to some of the images can be removed. For example, when shooting with a mobile phone, three layers of images are usually captured; in this embodiment, the bottom layer can be removed first and the images of the upper two layers optimized, ensuring that the upper two layers are stitched correctly.
When optimizing the images of the upper two layers, the optimization may specifically be performed according to the matched feature point pairs of those two layers and the initial camera parameters; moreover, for the initial camera parameters, the sensor-derived initial camera parameters of this embodiment may be preferred first, with the ideal camera parameters chosen next.
if optimization using the images of the upper two layers fails, optimizing using only the images of the middle layer.
For example, if the upper two layers cannot be optimized, it can be ensured that at least the middle layer's images are optimized and stitched correctly; in this case, the feature point pairs corresponding to the middle layer and the initial camera parameters are used for the computation, and during the computation the sensor-derived initial camera parameters of this embodiment may again be preferred first, with the ideal camera parameters chosen next.
In this embodiment, using initial camera parameters obtained from sensor information improves the accuracy of the optimum and hence the stitching result; in addition, adopting the above failure recovery scheme enhances robustness.
S13: performing color adjustment on the adjacent images to obtain color-adjusted adjacent images.
During phone photography, the parameters of each picture, such as exposure and white balance, are recomputed for every shot. Illumination changes across different regions of the scene lead to different exposures in adjacent pictures. Meanwhile, differently colored objects in different parts of the scene can also affect the white balance setting, causing the same object to look different in adjacent pictures, some brighter and some darker. Without additional color and illumination processing, uneven color appears in the overlapping regions of the panorama, so color and illumination compensation must be applied to the original images before stitching them.
In one embodiment, referring to Fig. 4, performing color adjustment on the adjacent images to obtain color-adjusted images includes:
S41: determining the overlapping region of adjacent images.
The algorithm for determining the overlapping region of two images can be implemented with existing techniques.
S42: determining, according to the pixel values of the two sets of pixels of the adjacent images within the overlapping region, the correction parameters that minimize the difference between the two sets of pixel values.
For example, if the adjacent images are A and B and their overlapping region is D, the pixel values of the pixels in region D of A and in region D of B can be obtained; a least-squares algorithm can then compute the correction parameters for A and B, respectively, that minimize the difference between the two sets of pixel values. The correction parameters are, for example, gamma correction parameters.
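A minimal sketch of the gamma fit for S42, assuming pixel values normalized to [0, 1] and replacing the closed-form least-squares solve with a coarse 1-D search for readability:

```python
def fit_gamma(src, ref, lo=0.5, hi=2.0, steps=151):
    """Find the gamma g minimizing sum((src_i**g - ref_i)**2) over the
    overlap pixels by scanning a grid of candidate values."""
    best_g, best_err = 1.0, float("inf")
    for k in range(steps):
        g = lo + (hi - lo) * k / (steps - 1)
        err = sum((s ** g - r) ** 2 for s, r in zip(src, ref))
        if err < best_err:
            best_g, best_err = g, err
    return best_g

def apply_gamma(pixels, g):
    """Apply the fitted gamma correction to an image's pixel values."""
    return [p ** g for p in pixels]

# Overlap pixels of A, and the same pixels as seen in B (here B is
# synthetically darker by a gamma of 1.5); the fit recovers ~1.5.
src = [0.2, 0.5, 0.8]
ref = [p ** 1.5 for p in src]
print(fit_gamma(src, ref))
```

In the embodiment, one such parameter is fitted per image and applied to the whole image, not only to the overlap region.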
S43: performing color adjustment on the adjacent images with the correction parameters to obtain color-adjusted adjacent images.
For example, after the correction parameters of A and B are determined, color correction can be performed on A with A's correction parameter and on B with B's correction parameter.
In this embodiment, color correction resolves uneven color in the panorama.
S14: stitching the color-adjusted adjacent images according to the optimized camera parameters to generate a panorama.
After camera parameter optimization and color adjustment, the adjacent images can be stitched to generate the panorama.
In one embodiment, referring to Fig. 5, stitching the color-adjusted adjacent images according to the optimized camera parameters to generate a panorama includes:
S51: downscaling the color-adjusted adjacent images to obtain reduced images.
For example, each of the adjacent images can be reduced to 1/8 of its original size.
S52: determining, according to the optimized camera parameters, the seams within the reduced images.
After the images are determined, determining the seam in each image from its adjacent images can use existing techniques.
Unlike the prior art, which generally uses images at their original size, this embodiment downscales the images and determines the seams on the reduced pictures, that is, on low-resolution images, which reduces the workload and saves time.
S53: generating, according to the optimized camera parameters and the seams, a mask map corresponding to each image, the mask map including a component part and a junction part.
The position of each image in the panorama can be determined from the optimized camera parameters; once the position and the seam are determined, the mask map of an image can be determined, consisting of the component part and the junction part.
For example, the four images to be stitched are shown in Fig. 6a, and their corresponding mask maps may be as shown in Fig. 6b, where the component part is the white area and the junction part is the black area.
S54: performing multi-band blending on the junction parts of adjacent images to obtain blended parts, and composing the component parts and the blended parts into the panorama.
The component parts of the mask maps can be used directly as the corresponding parts of the panorama; since the junction parts overlap across adjacent images, they require processing, for which the existing multi-band blending method can be used in this embodiment; the panorama is then composed of the component parts and the blended parts.
In addition, it should be noted that the size of a mask map is usually chosen to match the size of the panorama (that is, the same resolution), so that the panorama can be composed directly of the component parts and the blended parts.
For example, the stitched panorama may be as shown in Fig. 6c.
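Full multi-band blending is beyond a short sketch, but the junction handling of S54 can be illustrated with a one-level linear cross-fade; this is a simplified stand-in for, not the same as, the multi-band method the embodiment actually uses:

```python
def feather_blend(row_a, row_b):
    """Linearly cross-fade two overlapping pixel rows: the left image's
    weight ramps from 1 down to 0 across the junction region while the
    right image's weight ramps from 0 up to 1."""
    n = len(row_a)
    out = []
    for i in range(n):
        w = i / (n - 1) if n > 1 else 0.5   # right-image weight, 0 -> 1
        out.append((1 - w) * row_a[i] + w * row_b[i])
    return out

# A black row fading into a white row across a 3-pixel overlap.
print(feather_blend([0, 0, 0], [1, 1, 1]))
```

Multi-band blending performs this kind of cross-fade separately per frequency band, which hides seams better than a single linear ramp.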
In one embodiment, when the mask map is chosen to match the size of the panorama, since the panorama is stitched in three-dimensional form but displayed in two-dimensional form, an image's mask map may, when displayed, have a very large junction part with the component part scattered at its two ends. In this case, the following processing can also be performed:
when the component part of the mask map is divided into two pieces located at the two ends of the junction part, dividing the mask map into two smaller maps so that each small map can be processed separately.
For example, during the subsequent multi-band blending, and when composing the component parts with the blended parts, each small map is processed as an independent image.
In this embodiment, dividing mask maps into smaller pictures reduces processing time; for example, each mask map can save 1 to 2 seconds, and there are typically 7 to 8 mask maps, so the stitching speed improves noticeably.
The above embodiments of the present invention can be applied specifically to image stitching on a mobile phone; it can be understood that the above solution of the present invention can also be applied in other mobile devices, for example in the stitching of an in-vehicle system. Further, in in-vehicle and other scenarios, specific parameters can be adjusted according to the actual situation.
In the embodiments of the present invention, by determining adjacent images and performing feature extraction on adjacent images, the accuracy requirement can be met while the feature extraction workload is reduced, improving the image stitching speed. Moreover, the above method of this embodiment is particularly suitable for mobile devices: through the above improvements in image preprocessing, feature matching, global optimization, global color adjustment, and multi-band blending, multi-layer photos can be stitched into panoramas on a mobile device quickly and with high quality. Experiments show a stitching success rate above 80% and a stitching time within 40 seconds. In addition, the solution of this embodiment can be applied to different kinds of user terminals to improve their performance.
Fig. 7 is a schematic structural diagram of a panoramic image generation apparatus for a user terminal according to another embodiment of the present invention; the apparatus may reside in the user terminal, and the apparatus 70 includes:
a matching module 71 configured to acquire multiple images captured by the user terminal, determine the adjacency relationships among the images, perform feature matching on adjacent images, and obtain matched feature point pairs;
The camera of the user terminal can be used to photograph the surrounding environment, obtaining multiple images under different conditions (such as angle and illumination).
In one embodiment, to reduce the amount of computation, matching relationships are computed only between adjacent images, not between non-adjacent images; therefore, adjacent images can be determined first.
Optionally, taking a mobile device as an example of the user terminal, the matching module 71 determining adjacent images from the multiple images includes:
for each image, acquiring the information of a sensor provided in the mobile device at the time the image was captured, and determining adjacent images according to the sensor information.
The sensor is, for example, a gyroscope, a magnetometer, or a gravity sensor.
For example, each time an image is captured, the corresponding sensor information is recorded, and adjacent images can later be determined from this information. For instance, suppose the angle of the first image is a first angle, the angle of the second image is a second angle, and the angle of the third image is a third angle; if the difference between the first and second angles is smaller than the difference between the third and first angles, the first image can be determined to be adjacent to the second image. Of course, this way of determining adjacent images is only a simplified example, and other implementations are possible.
In this embodiment, by using devices within the mobile device, the mobile device's own resources can be fully exploited to determine adjacent images conveniently and quickly.
After the adjacent images are determined, feature extraction can be performed separately on each of the adjacent images to complete feature matching.
Optionally, the matching module 71 performing feature matching on adjacent images and obtaining matched feature point pairs includes:
dividing each adjacent image into a preset number of regions, and extracting, in each region, feature points whose number is less than a preset value;
For example, each of the adjacent images can be divided into four regions, with a limited number of feature points extracted in each region.
The feature points may specifically be SIFT feature points.
performing feature matching according to the feature points extracted in the adjacent images to obtain matched feature point pairs.
For example, the Random Sample Consensus (RANSAC) algorithm is used to match the SIFT feature points in the adjacent images to obtain matched feature point pairs.
In this embodiment, by limiting the number of feature points, the amount of computation can be reduced, thereby improving the stitching speed.
In one embodiment, the matching module is further configured to preprocess the images before feature point extraction, and/or to remove some mismatched feature point pairs after matching. For details, refer to the description in the method embodiments, which is not repeated here.
an optimization module 72 configured to obtain optimized camera parameters according to the matched feature point pairs and initial camera parameters;
A bundle adjustment algorithm can be used for global camera parameter optimization.
The bundle adjustment algorithm is a nonlinear system that optimizes the camera parameters by minimizing the projection differences of the matched feature point pairs between all images.
The bundle adjustment algorithm is solved with the Levenberg-Marquardt iterative algorithm. Nonlinear optimization is very sensitive to the initial values: if the given initial values are poor, only a local optimum can be reached, causing misalignment or ghosting in the stitched panorama.
In one embodiment, referring to Fig. 8, the apparatus 70 further includes a determination module 75 for determining the initial values, the determination module 75 being specifically configured to:
acquire the information of a sensor provided in the mobile device at the time each image was captured, and determine the initial camera parameters of each image according to this information.
The camera parameters may include the focal length, the rotation matrix, and so on.
In the prior art, the initial camera parameters use random initial values; experiments verify that the sensor-derived initial camera parameters of this embodiment yield a better optimum, thereby reducing stitching ghosting and misalignment.
In addition, when solving with the Levenberg-Marquardt iterative algorithm, optimization may fail (the algorithm does not converge); the prior art offers no solution for optimization failure.
In an embodiment of the present invention, referring to Fig. 8, the apparatus 70 further includes:
a processing module 76 configured to: if the camera parameter optimization fails, optimize again with predetermined ideal camera parameters as the initial camera parameters; if the re-optimization fails and the images are divided into three layers, remove the images of the bottom layer and optimize using the images of the upper two layers; and if optimization using the images of the upper two layers fails, optimize using only the images of the middle layer.
The ideal camera parameters can be determined as follows: determine the image format captured by the user terminal (such as a mobile device) for each layer, estimate the angle of each image by even distribution, and obtain the ideal camera parameters from these angles.
When optimization with the ideal camera parameters as initial camera parameters also fails, the feature point pairs corresponding to some of the images can be removed. For example, when shooting with a mobile phone, three layers of images are usually captured; in this embodiment, the bottom layer can be removed first and the images of the upper two layers optimized, ensuring that the upper two layers are stitched correctly.
When optimizing the images of the upper two layers, the camera parameters may include the focal length, the rotation matrix, and so on; for the initial camera parameters, the sensor-derived initial values of this embodiment may be preferred first, with the ideal camera parameters chosen next. For example, if the upper two layers cannot be optimized, it can be ensured that at least the middle layer's images are optimized and stitched correctly; in this case, the feature point pairs corresponding to the middle layer and the initial camera parameters are used for the computation, and during the computation the sensor-derived initial camera parameters of this embodiment may again be preferred first, with the ideal camera parameters chosen next.
In this embodiment, using initial camera parameters obtained from sensor information improves the accuracy of the optimum and hence the stitching result; in addition, adopting the above failure recovery scheme enhances robustness.
an adjustment module 73 configured to perform color adjustment on adjacent images to obtain color-adjusted adjacent images;
During phone photography, the parameters of each picture, such as exposure and white balance, are recomputed for every shot. Illumination changes across different regions of the scene lead to different exposures in adjacent pictures. Meanwhile, differently colored objects in different parts of the scene can also affect the white balance setting, causing the same object to look different in adjacent pictures, some brighter and some darker. Without additional color and illumination processing, uneven color appears in the overlapping regions of the panorama, so color and illumination compensation must be applied to the original images before stitching them.
In one embodiment, the adjustment module 73 is specifically configured to:
determine the overlapping region of adjacent images;
The algorithm for determining the overlapping region of two images can be implemented with existing techniques.
determine, according to the pixel values of the two sets of pixels of the adjacent images within the overlapping region, the correction parameters that minimize the difference between the two sets of pixel values;
For example, if the adjacent images are A and B and their overlapping region is D, the pixel values of the pixels in region D of A and in region D of B can be obtained; a least-squares algorithm can then compute the correction parameters for A and B, respectively, that minimize the difference between the two sets of pixel values. The correction parameters are, for example, gamma correction parameters.
perform color adjustment on the adjacent images with the correction parameters to obtain color-adjusted adjacent images.
For example, after the correction parameters of A and B are determined, color correction can be performed on A with A's correction parameter and on B with B's correction parameter.
In this embodiment, color correction resolves uneven color in the panorama.
a stitching module 74 configured to stitch the color-adjusted adjacent images according to the optimized camera parameters to generate a panorama.
After camera parameter optimization and color adjustment, the adjacent images can be stitched to generate the panorama.
In one embodiment, the stitching module 74 is specifically configured to:
downscale the color-adjusted adjacent images to obtain reduced images;
For example, each of the adjacent images can be reduced to 1/8 of its original size.
determine, according to the optimized camera parameters, the seams within the reduced images;
After the images are determined, determining the seam in each image from its adjacent images can use existing techniques.
Unlike the prior art, which generally uses images at their original size, this embodiment downscales the images and determines the seams on the reduced pictures, that is, on low-resolution images, which reduces the workload and saves time.
generate, according to the optimized camera parameters and the seams, a mask map corresponding to each image, the mask map including a component part and a junction part;
The position of each image in the panorama can be determined from the optimized camera parameters; once the position and the seam are determined, the mask map of an image can be determined, consisting of the component part and the junction part.
For example, the four images to be stitched are shown in Fig. 6a, and their corresponding mask maps may be as shown in Fig. 6b, where the component part is the white area and the junction part is the black area.
perform multi-band blending on the junction parts of adjacent images to obtain blended parts, and compose the component parts and the blended parts into the panorama.
The component parts of the mask maps can be used directly as the corresponding parts of the panorama; since the junction parts overlap across adjacent images, they require processing, for which the existing multi-band blending method can be used in this embodiment; the panorama is then composed of the component parts and the blended parts.
In addition, it should be noted that the size of a mask map is usually chosen to match the size of the panorama (that is, the same resolution), so that the panorama can be composed directly of the component parts and the blended parts.
For example, the stitched panorama may be as shown in Fig. 6c.
In one embodiment, the stitching module 74 is further configured to:
when the component part of the mask map is divided into two pieces located at the two ends of the junction part, divide the mask map into two smaller maps so that each small map can be processed separately.
For example, during the subsequent multi-band blending, and when composing the component parts with the blended parts, each small map is processed as an independent image.
In this embodiment, dividing mask maps into smaller pictures reduces processing time; for example, each mask map can save 1 to 2 seconds, and there are typically 7 to 8 mask maps, so the stitching speed improves noticeably.
The above embodiments of the present invention can be applied specifically to image stitching on a mobile phone; it can be understood that the above solution of the present invention can also be applied in other user terminals, for example in the stitching of an in-vehicle system. Further, in in-vehicle and other scenarios, specific parameters can be adjusted according to the actual situation.
In the embodiments of the present invention, by determining adjacent images and performing feature extraction on adjacent images, the accuracy requirement can be met while the feature extraction workload is reduced, improving the image stitching speed. Moreover, the above method of this embodiment is particularly suitable for mobile devices: through the above improvements in image preprocessing, feature matching, global optimization, global color adjustment, and multi-band blending, multi-layer photos can be stitched into panoramas on a mobile device quickly and with high quality. Experiments show a stitching success rate above 80% and a stitching time within 40 seconds. In addition, the solution of this embodiment can be applied to different kinds of user terminals to improve their performance.
An embodiment of the present invention further provides a user terminal, including: one or more processors; a memory; and one or more programs stored in the memory which, when executed by the one or more processors, perform the method according to any embodiment of the first aspect of the present invention.
An embodiment of the present invention further provides a non-volatile computer storage medium storing one or more modules which, when executed, perform the method according to any embodiment of the first aspect of the present invention.
It should be noted that in the description of the present invention, the terms "first", "second", and the like are used for descriptive purposes only and are not to be understood as indicating or implying relative importance. Furthermore, in the description of the present invention, unless otherwise stated, "multiple" means at least two.
Any process or method description in a flowchart or otherwise described herein can be understood as representing a module, segment, or portion of code including one or more executable instructions for implementing specific logical functions or steps of the process; and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they can be implemented by any one of, or a combination of, the following techniques well known in the art: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and so on.
Those of ordinary skill in the art can understand that all or part of the steps carried by the method of the above embodiments can be completed by a program instructing related hardware; the program can be stored in a computer-readable storage medium and, when executed, includes one of or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "an example", "a specific example", "some examples", and the like mean that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic statements of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art can make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.

Claims (14)

  1. A panoramic image generation method for a user terminal, comprising:
    acquiring multiple images captured by the user terminal, determining the adjacency relationships among the images, performing feature matching on adjacent images, and obtaining matched feature point pairs;
    obtaining optimized camera parameters according to the matched feature point pairs and initial camera parameters;
    performing color adjustment on adjacent images to obtain color-adjusted adjacent images;
    stitching the color-adjusted adjacent images according to the optimized camera parameters to generate a panorama.
  2. The method according to claim 1, wherein performing feature matching on adjacent images and obtaining matched feature point pairs comprises:
    dividing each adjacent image into a preset number of regions, and extracting, in each region, feature points whose number is less than a preset value;
    performing feature matching according to the feature points extracted in the adjacent images to obtain matched feature point pairs.
  3. The method according to claim 1 or 2, further comprising:
    if the camera parameter optimization fails, optimizing again with predetermined ideal camera parameters as the initial camera parameters;
    if the re-optimization fails and the images are divided into three layers, removing the images of the bottom layer and optimizing using the images of the upper two layers;
    if optimization using the images of the upper two layers fails, optimizing using only the images of the middle layer.
  4. The method according to any one of claims 1-3, wherein performing color adjustment on adjacent images to obtain color-adjusted adjacent images comprises:
    determining the overlapping region of adjacent images;
    determining, according to the pixel values of the two sets of pixels of the adjacent images within the overlapping region, correction parameters that minimize the difference between the two sets of pixel values;
    performing color adjustment on the adjacent images with the correction parameters to obtain color-adjusted adjacent images.
  5. The method according to any one of claims 1-4, wherein stitching the color-adjusted adjacent images according to the optimized camera parameters to generate a panorama comprises:
    downscaling the color-adjusted adjacent images to obtain reduced images;
    determining, according to the optimized camera parameters, seams within the reduced images;
    generating, according to the optimized camera parameters and the seams, a mask map corresponding to each image, the mask map comprising a component part and a junction part;
    performing multi-band blending on the junction parts of adjacent images to obtain blended parts, and composing the component parts and the blended parts into a panorama.
  6. The method according to claim 5, further comprising:
    when the component part of the mask map is divided into two pieces located at the two ends of the junction part, dividing the mask map into two smaller maps so that each small map can be processed separately.
  7. A panoramic image generation apparatus for a user terminal, comprising:
    a matching module configured to acquire multiple images captured by the user terminal, determine the adjacency relationships among the images, perform feature matching on adjacent images, and obtain matched feature point pairs;
    an optimization module configured to obtain optimized camera parameters according to the matched feature point pairs and initial camera parameters;
    an adjustment module configured to perform color adjustment on adjacent images to obtain color-adjusted adjacent images;
    a stitching module configured to stitch the color-adjusted adjacent images according to the optimized camera parameters to generate a panorama.
  8. The apparatus according to claim 7, wherein the matching module performing feature matching on adjacent images and obtaining matched feature point pairs comprises:
    dividing each adjacent image into a preset number of regions, and extracting, in each region, feature points whose number is less than a preset value;
    performing feature matching according to the feature points extracted in the adjacent images to obtain matched feature point pairs.
  9. The apparatus according to claim 7 or 8, further comprising a processing module configured to: if the camera parameter optimization fails, optimize again with predetermined ideal camera parameters as the initial values; if the re-optimization fails and the images are divided into three layers, remove the images of the bottom layer and optimize using the images of the upper two layers; and if optimization using the images of the upper two layers fails, optimize using only the images of the middle layer.
  10. The apparatus according to any one of claims 7-9, wherein the adjustment module is specifically configured to:
    determine the overlapping region of adjacent images;
    determine, according to the pixel values of the two sets of pixels of the adjacent images within the overlapping region, correction parameters that minimize the difference between the two sets of pixel values;
    perform color adjustment on the adjacent images with the correction parameters to obtain color-adjusted adjacent images.
  11. The apparatus according to any one of claims 7-10, wherein the stitching module is specifically configured to:
    downscale the color-adjusted adjacent images to obtain reduced images;
    determine, according to the optimized camera parameters, seams within the reduced images;
    generate, according to the optimized camera parameters and the seams, a mask map corresponding to each image, the mask map comprising a component part and a junction part;
    perform multi-band blending on the junction parts of adjacent images to obtain blended parts, and compose the component parts and the blended parts into a panorama.
  12. The apparatus according to claim 11, wherein the stitching module is further configured to:
    when the component part of the mask map is divided into two pieces located at the two ends of the junction part, divide the mask map into two smaller maps so that each small map can be processed separately.
  13. A user terminal, comprising:
    one or more processors;
    a memory;
    one or more programs stored in the memory which, when executed by the one or more processors:
    perform the method according to any one of claims 1-6.
  14. A non-volatile computer storage medium storing one or more modules which, when executed:
    perform the method according to any one of claims 1-6.
PCT/CN2015/095070 2015-06-30 2015-11-19 用于用户终端的全景图像生成方法和装置 WO2017000484A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2017565747A JP6605049B2 (ja) 2015-06-30 2015-11-19 ユーザ端末のパノラマ画像生成方法及び装置
KR1020177031584A KR101956151B1 (ko) 2015-06-30 2015-11-19 사용자 단말기에 이용되는 전경 영상 생성 방법 및 장치
US15/739,801 US10395341B2 (en) 2015-06-30 2015-11-19 Panoramic image generation method and apparatus for user terminal
EP15897011.1A EP3319038A4 (en) 2015-06-30 2015-11-19 METHOD AND APPARATUS FOR PANORAMIC IMAGE GENERATION FOR USER TERMINAL

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510377251.6 2015-06-30
CN201510377251.6A CN104992408B (zh) 2015-06-30 2015-06-30 用于用户终端的全景图像生成方法和装置

Publications (1)

Publication Number Publication Date
WO2017000484A1 true WO2017000484A1 (zh) 2017-01-05

Family

ID=54304216

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/095070 WO2017000484A1 (zh) 2015-06-30 2015-11-19 用于用户终端的全景图像生成方法和装置

Country Status (6)

Country Link
US (1) US10395341B2 (zh)
EP (1) EP3319038A4 (zh)
JP (1) JP6605049B2 (zh)
KR (1) KR101956151B1 (zh)
CN (1) CN104992408B (zh)
WO (1) WO2017000484A1 (zh)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104992408B (zh) * 2015-06-30 2018-06-05 百度在线网络技术(北京)有限公司 用于用户终端的全景图像生成方法和装置
KR20170124814A (ko) * 2016-05-03 2017-11-13 삼성전자주식회사 영상 표시 장치 및 그 동작 방법
CN106254844B (zh) * 2016-08-25 2018-05-22 成都易瞳科技有限公司 一种全景拼接颜色校正方法
CN108573470B (zh) * 2017-03-08 2020-10-16 北京大学 图像拼接方法及装置
CN107424179A (zh) * 2017-04-18 2017-12-01 微鲸科技有限公司 一种图像均衡方法及装置
CN108122199A (zh) * 2017-12-19 2018-06-05 歌尔科技有限公司 一种全景相机的原始图像颜色调整方法及装置
CN109978761B (zh) * 2017-12-28 2023-06-27 杭州海康威视系统技术有限公司 一种生成全景图片的方法、装置及电子设备
US10764496B2 (en) * 2018-03-16 2020-09-01 Arcsoft Corporation Limited Fast scan-type panoramic image synthesis method and device
CN108805799B (zh) * 2018-04-20 2021-04-23 平安科技(深圳)有限公司 全景图像合成装置、方法及计算机可读存储介质
TWI696147B (zh) * 2018-09-05 2020-06-11 宅妝股份有限公司 全景圖形成方法及系統
CN111260698B (zh) * 2018-12-03 2024-01-02 北京魔门塔科技有限公司 双目图像特征匹配方法及车载终端
CN111489288B (zh) * 2019-01-28 2023-04-07 北京魔门塔科技有限公司 一种图像的拼接方法和装置
CN110363706B (zh) * 2019-06-26 2023-03-21 杭州电子科技大学 一种大面积桥面图像拼接方法
CN111031243A (zh) * 2019-12-16 2020-04-17 河南铭视科技股份有限公司 一种全景图像的生成方法及装置
CN113012052B (zh) * 2019-12-19 2022-09-20 浙江商汤科技开发有限公司 图像处理方法及装置、电子设备和存储介质
CN113411488A (zh) * 2020-03-17 2021-09-17 长沙智能驾驶研究院有限公司 全景图像生成方法、装置、存储介质及计算机设备
CN111833250A (zh) * 2020-07-13 2020-10-27 北京爱笔科技有限公司 一种全景图像拼接方法、装置、设备及存储介质
CN112102307B (zh) * 2020-09-25 2023-10-20 杭州海康威视数字技术股份有限公司 全局区域的热度数据确定方法、装置及存储介质
CN113096043B (zh) * 2021-04-09 2023-02-17 杭州睿胜软件有限公司 图像处理方法及装置、电子设备和存储介质
CN116452426B (zh) * 2023-06-16 2023-09-05 广汽埃安新能源汽车股份有限公司 一种全景图拼接方法及装置

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101877140A (zh) * 2009-12-18 2010-11-03 北京邮电大学 一种基于全景图的全景虚拟游方法
CN103226822A (zh) * 2013-05-15 2013-07-31 清华大学 医疗影像拼接方法
JP2014068080A (ja) * 2012-09-24 2014-04-17 Canon Inc 撮像装置及び撮像方法
CN104143182A (zh) * 2014-08-05 2014-11-12 乐视致新电子科技(天津)有限公司 一种全景图拼接方法和终端设备
CN104992408A (zh) * 2015-06-30 2015-10-21 百度在线网络技术(北京)有限公司 用于用户终端的全景图像生成方法和装置

Family Cites Families (14)

Publication number Priority date Publication date Assignee Title
JP3144805B2 (ja) * 1996-06-05 2001-03-12 株式会社 ビー・エム・エル ヒトTh2特異的タンパク質及びこれをコードする遺伝子(B19)並びにこれに関連する形質転換体、組換えベクター及びモノクローナル抗体
JP3408117B2 (ja) * 1997-06-26 2003-05-19 日本電信電話株式会社 カメラ操作推定方法およびカメラ操作推定プログラムを記録した記録媒体
US6639596B1 (en) * 1999-09-20 2003-10-28 Microsoft Corporation Stereo reconstruction from multiperspective panoramas
JP2006113807A (ja) * 2004-10-14 2006-04-27 Canon Inc 多視点画像の画像処理装置および画像処理プログラム
EP2030433B1 (de) 2006-05-29 2018-06-20 HERE Global B.V. Verfahren und anordnung zur behandlung von datensätzen bildgebender sensoren sowie ein entsprechendes computerprogramm und ein entsprechendes computerlesbares speichermedium
US8368720B2 (en) * 2006-12-13 2013-02-05 Adobe Systems Incorporated Method and apparatus for layer-based panorama adjustment and editing
GB0625455D0 (en) * 2006-12-20 2007-01-31 Mitsubishi Electric Inf Tech Graph-based multiple panorama extraction from unordered image sets
US10080006B2 (en) * 2009-12-11 2018-09-18 Fotonation Limited Stereoscopic (3D) panorama creation on handheld device
CN103793891A (zh) 2012-10-26 2014-05-14 海法科技有限公司 低复杂度的全景影像接合方法
JP6091172B2 (ja) * 2012-11-15 2017-03-08 オリンパス株式会社 特徴点検出装置およびプログラム
US9286656B2 (en) * 2012-12-20 2016-03-15 Chung-Ang University Industry-Academy Cooperation Foundation Homography estimation apparatus and method
US10096114B1 (en) * 2013-11-27 2018-10-09 Google Llc Determining multiple camera positions from multiple videos
US10249058B2 (en) * 2014-12-24 2019-04-02 Panasonic Intellectual Property Management Co., Ltd. Three-dimensional information restoration device, three-dimensional information restoration system, and three-dimensional information restoration method
JP6976733B2 (ja) * 2017-06-14 2021-12-08 キヤノン株式会社 画像処理装置、画像処理方法、およびプログラム

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN101877140A (zh) * 2009-12-18 2010-11-03 北京邮电大学 一种基于全景图的全景虚拟游方法
JP2014068080A (ja) * 2012-09-24 2014-04-17 Canon Inc 撮像装置及び撮像方法
CN103226822A (zh) * 2013-05-15 2013-07-31 清华大学 医疗影像拼接方法
CN104143182A (zh) * 2014-08-05 2014-11-12 乐视致新电子科技(天津)有限公司 一种全景图拼接方法和终端设备
CN104992408A (zh) * 2015-06-30 2015-10-21 百度在线网络技术(北京)有限公司 用于用户终端的全景图像生成方法和装置

Non-Patent Citations (1)

Title
See also references of EP3319038A4 *

Also Published As

Publication number Publication date
CN104992408B (zh) 2018-06-05
CN104992408A (zh) 2015-10-21
US10395341B2 (en) 2019-08-27
KR101956151B1 (ko) 2019-03-08
JP2018524710A (ja) 2018-08-30
KR20170131694A (ko) 2017-11-29
EP3319038A4 (en) 2019-01-23
EP3319038A1 (en) 2018-05-09
US20180365803A1 (en) 2018-12-20
JP6605049B2 (ja) 2019-11-13

Similar Documents

Publication Publication Date Title
WO2017000484A1 (zh) 用于用户终端的全景图像生成方法和装置
US11115638B2 (en) Stereoscopic (3D) panorama creation on handheld device
WO2018201809A1 (zh) 基于双摄像头的图像处理装置及方法
US9591237B2 (en) Automated generation of panning shots
US8294748B2 (en) Panorama imaging using a blending map
JP4760973B2 (ja) 撮像装置及び画像処理方法
US20110141226A1 (en) Panorama imaging based on a lo-res map
US20120019613A1 (en) Dynamically Variable Stereo Base for (3D) Panorama Creation on Handheld Device
EP2545411B1 (en) Panorama imaging
US20120019614A1 (en) Variable Stereo Base for (3D) Panorama Creation on Handheld Device
US20110141224A1 (en) Panorama Imaging Using Lo-Res Images
US20110141229A1 (en) Panorama imaging using super-resolution
US20110141225A1 (en) Panorama Imaging Based on Low-Res Images
WO2017113504A1 (zh) 一种图像显示方法以及装置
JP2013005439A (ja) 映像処理装置及び方法
WO2016011758A1 (zh) 图像处理方法和图像处理装置
US20130076941A1 (en) Systems And Methods For Editing Digital Photos Using Surrounding Context
KR101204888B1 (ko) 디지털 촬영장치, 그 제어방법 및 이를 실행시키기 위한프로그램을 저장한 기록매체
CN114390262A (zh) 用于拼接三维球面全景影像的方法及电子装置
US7990412B2 (en) Systems and methods for correcting image perspective
JP4148817B2 (ja) パノラマ画像撮影装置及びパノラマ画像撮影方法
JP4525841B2 (ja) 画像合成装置、コンピュータプログラムおよび記録媒体
JP4458720B2 (ja) 画像入力装置およびプログラム
JP5354059B2 (ja) 撮像装置、画像処理方法及びプログラム
JP2012022162A (ja) 投影制御システム、投影制御装置およびプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15897011

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20177031584

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2017565747

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2015897011

Country of ref document: EP