WO2018090455A1 - Terminal panoramic image processing method and apparatus, and terminal - Google Patents

Terminal panoramic image processing method and apparatus, and terminal Download PDF

Info

Publication number
WO2018090455A1
Authority
WO
WIPO (PCT)
Prior art keywords
depth
image
depth information
overlapping area
field
Prior art date
Application number
PCT/CN2016/112775
Other languages
English (en)
French (fr)
Inventor
闫明
Original Assignee
宇龙计算机通信科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 宇龙计算机通信科技(深圳)有限公司 filed Critical 宇龙计算机通信科技(深圳)有限公司
Publication of WO2018090455A1 publication Critical patent/WO2018090455A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/14Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present invention relates to the field of terminal technologies, and in particular to a terminal panoramic image processing method, apparatus, and terminal.
  • when a mobile phone is panned for panoramic shooting, the user must move the phone smoothly and horizontally so that the arrow marker lands accurately in the framing frame; the next image is captured there and stitched to the previous one, and after rotating through 180 degrees a panoramic image is obtained.
  • however, the panoramic image captured by a single camera has no background blur and cannot convey a stereoscopic effect or a sense of layering.
  • the technical problem to be solved by the present invention is therefore to provide a terminal panoramic image processing method, apparatus, and terminal, addressing the problems that:
  • a panoramic image captured by a single camera has no background blur and cannot convey a sense of three-dimensionality or layering; and
  • the panoramic image obtained after stitching looks unnatural.
  • the present invention provides a terminal panoramic image processing method, including:
  • acquiring two adjacent frames of depth images by panoramic shooting and obtaining their respective depth information values; determining the scene overlap area of the two depth images and obtaining the image of the overlap area; and using the depth information values to perform depth-of-field smoothing on the image of the overlap area, with the processed images panoramically stitched.
  • optionally, the depth information mean is used to perform depth-of-field adjustment on the image of the overlap area.
  • optionally, depth-of-field adjustment is performed on each sub-image using the depth information value of the respective sub-image.
  • determining the scene overlap area of the two depth images includes:
  • moving the two adjacent depth images toward each other to determine a pre-overlap area; and, when the similarity index of the pre-overlap area satisfies a preset standard index,
  • determining the pre-overlap area as the scene overlap area.
  • calculating the similarity index of the pre-overlap area includes:
  • taking the distance between the two grayscale histograms as the similarity index.
  • the present invention provides a terminal panoramic image processing apparatus, including:
  • an obtaining module configured to acquire two adjacent frames of depth images by panoramic shooting and obtain their respective depth information values;
  • a determining module configured to determine the scene overlap area of the two depth images and obtain the image of the overlap area;
  • a depth-of-field smoothing module configured to perform depth-of-field smoothing on the image of the overlap area using the depth information values;
  • a panoramic stitching module configured to panoramically stitch the images processed by the depth-of-field smoothing module.
  • a first calculation sub-module configured to calculate the mean of the depth information values of the two adjacent depth images;
  • a first depth-of-field adjustment sub-module configured to perform depth-of-field adjustment on the image of the overlap area using the depth information mean.
  • a dividing sub-module configured to vertically divide the image of the overlap area into at least two equal sub-images;
  • a second calculation sub-module configured to calculate the sum of the depth information values of the two adjacent depth images;
  • a first determining sub-module configured to determine the depth information value of each sub-image from the depth information sum and the depth information values of the two adjacent depth images, so that the depth information differences between adjacent sub-images are equal;
  • a second depth-of-field adjustment sub-module configured to perform depth-of-field adjustment on each sub-image using its depth information value.
  • the determining module comprises:
  • a second determining sub-module configured to move the two adjacent depth images toward each other to determine a pre-overlap area;
  • a third calculation sub-module configured to calculate the similarity index of the pre-overlap area;
  • a third determining sub-module configured to determine the pre-overlap area as the scene overlap area when the similarity index satisfies a preset standard index.
  • the third calculation sub-module is specifically configured to calculate the gray-value distribution histogram of each pre-overlap-area image, calculate the distance between the two grayscale histograms using the Euclidean distance, and use that distance as the similarity index.
  • the present invention provides a terminal including the above-described terminal panoramic image processing apparatus.
  • a terminal panoramic image processing method, apparatus, and terminal are provided. The method includes: acquiring two adjacent frames of depth images by panoramic shooting and obtaining their respective depth information values; determining the scene overlap area of the two depth images and obtaining the image of the overlap area; performing depth-of-field smoothing on the image of the overlap area using the depth information values; and panoramically stitching the processed images.
  • with this scheme, the depth information values of the two adjacent depth images are used to smooth the depth of field of the overlap-area image, so that the resulting panoramic image has a background blur effect, a strong stereoscopic effect, strong layering, prominent scenery, a natural stitching transition, and a good visual effect.
  • FIG. 1 is a flowchart of a terminal panoramic image processing method according to Embodiment 1 of the present invention.
  • FIG. 2 is a schematic diagram of a terminal panoramic image processing apparatus according to Embodiment 2 of the present invention.
  • FIG. 3 is a schematic diagram of another terminal panoramic image processing apparatus according to Embodiment 2 of the present invention.
  • FIG. 4 is a schematic diagram of a terminal according to Embodiment 3 of the present invention.
  • FIG. 5 is a schematic diagram of another terminal according to Embodiment 4 of the present invention.
  • FIG. 1 is a flowchart of a method for processing a terminal panoramic image according to an embodiment of the present disclosure.
  • the terminal panoramic image processing method includes the following steps:
  • S11: Acquire two adjacent frames of depth images by panoramic shooting, and obtain their respective depth information values.
  • the two cameras simultaneously capture the first frame of the scene, and the depth image of the scene is computed with a stereo matching algorithm and recorded as depth image A; the cameras are then shifted horizontally to the right by 2 cm and simultaneously capture the second frame of the scene, whose depth image is computed in the same way and recorded as depth image B.
  • because depth image B is obtained after shifting only a small distance from depth image A, most of the scene in depth image B overlaps with that in depth image A.
  • stereo matching is the process of establishing correspondences between the matching primitives of two images, and it is the key to a binocular stereo vision system.
  • almost any computer vision system contains a matching algorithm at its core, so the study of matching algorithms is extremely important.
  • the binocular matching problem can be generalized: given two images of the same environment that differ in time, orientation, or modality (for example, the two images captured by a binocular stereo vision system, or maps versus remote-sensing or aerial-survey images), corresponding parts are generally found in one of two ways: (1) correlation of the grayscale distributions; (2) similarity of the feature distributions. Accordingly there are two classes of algorithms: (1) intensity-based algorithms; (2) feature-based algorithms. By control strategy, algorithms can further be classified as (1) coarse-to-fine (hierarchical), (2) constraint-based (relaxation), or (3) based on a multilevel representation.
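As an illustration of the intensity-based class of matching algorithms mentioned above, the following is a minimal block-matching sketch. It is not the patent's algorithm (the patent does not specify one); it only shows how a disparity map, from which depth is derived, can be estimated from a rectified stereo pair by comparing grayscale patches with the sum of absolute differences (SAD). All names and parameters are illustrative.

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=32):
    """Naive intensity-based block matching: for each pixel of the left
    image, search along the same scanline of the right image for the
    block with the smallest sum of absolute differences (SAD)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                sad = np.abs(patch.astype(np.int32)
                             - cand.astype(np.int32)).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp  # depth is proportional to (focal_length * baseline) / disparity
```

In practice a production system would add cost aggregation and sub-pixel refinement; the point here is only the grayscale-correlation principle.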
  • S12: Determine the scene overlap area of the two depth images, and obtain the image of the overlap area.
  • determining the scene overlap area of the two depth images in S12 includes:
  • moving the two adjacent depth images toward each other to determine a pre-overlap area;
  • specifically, the above depth image A is moved to the right and depth image B is moved to the left, with both images moved by equal amounts.
  • calculating the similarity index of the pre-overlap area; and
  • when the similarity index satisfies the preset standard index, determining the pre-overlap area as the scene overlap area.
  • depth image A and depth image B are moved repeatedly until the similarity index of the pre-overlap area satisfies the preset standard index.
  • alternatively, depth image B may start at the leftmost position of depth image A and be shifted to the right; if the similarity index of the pre-overlap area does not satisfy the preset standard index, depth image B is moved further until the similarity index meets the preset standard index.
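The moving-and-comparing search in S12 can be sketched as follows. This assumes grayscale frames of equal height and, instead of stopping at a preset standard index, simply returns the candidate overlap whose gray-value histograms are closest; the function name and parameters are illustrative.

```python
import numpy as np

def find_overlap(img_a, img_b, min_overlap=8):
    """Estimate the scene overlap between two grayscale frames by sliding
    them toward each other: for each candidate overlap width, compare the
    gray-value distribution histograms (256 bins) of A's right strip and
    B's left strip using the Euclidean distance, and keep the overlap
    whose strips are most similar."""
    w = img_a.shape[1]
    best_overlap, best_dist = 0, np.inf
    for overlap in range(min_overlap, w + 1):
        strip_a = img_a[:, w - overlap:]      # right edge of frame A
        strip_b = img_b[:, :overlap]          # left edge of frame B
        hist_a, _ = np.histogram(strip_a, bins=256, range=(0, 256))
        hist_b, _ = np.histogram(strip_b, bins=256, range=(0, 256))
        hist_a = hist_a / hist_a.sum()        # normalize to distributions
        hist_b = hist_b / hist_b.sum()
        dist = np.linalg.norm(hist_a - hist_b)   # Euclidean distance
        if dist < best_dist:
            best_dist, best_overlap = dist, overlap
    return best_overlap
```

The patent's threshold-based variant would simply return the first overlap whose distance satisfies the preset standard index rather than scanning all candidates.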
  • S13: Perform depth-of-field smoothing on the image of the overlap area using the depth information values, and panoramically stitch the processed images.
  • performing depth-of-field smoothing on the image of the overlap area using the depth information values in S13 includes:
  • calculating the mean of the depth information values of the two adjacent depth images and using it to perform depth-of-field adjustment on the image of the overlap area.
  • for example, if the depth information value of depth image A is a and the depth information value of depth image B is b, the depth information mean of the two images is (a+b)/2, and (a+b)/2 is used to perform depth-of-field adjustment on the image of the overlap area of depth image A and depth image B.
  • specifically, the background of the overlap-area image of depth image A and depth image B is blurred: regions with larger depth information values are blurred by median filtering, while regions with smaller depth information values are sharpened so that they become clear.
  • after the blurring and stitching of all captured images is completed, the panoramic image is obtained.
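A minimal sketch of this depth-guided smoothing step, assuming a per-pixel depth map is available for the overlap area. The choice of a 5x5 median filter and an unsharp mask for sharpening is an assumption: the patent only names median filtering for far regions and sharpening for near regions, without parameters.

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def depth_of_field_smooth(image, depth, mean_depth):
    """Depth-guided smoothing of the overlap-area image: pixels whose
    depth exceeds the reference depth (the background) are blurred with
    a median filter, while nearer pixels (the foreground) are sharpened
    with a simple unsharp mask so they become clear. In the patent's
    example the reference depth would be the mean (a + b) / 2."""
    img = image.astype(np.float32)
    blurred = median_filter(img, size=5)                        # far regions
    sharpened = img + (img - gaussian_filter(img, sigma=1.0))   # near regions
    out = np.where(depth > mean_depth, blurred, sharpened)
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

A smooth transition between the two treatments (e.g. blending by depth difference) would avoid a hard seam at the depth threshold; the hard `np.where` split keeps the sketch minimal.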
  • alternatively, performing depth-of-field smoothing on the image of the overlap area using the depth information values in S13 includes:
  • performing depth-of-field adjustment on each sub-image using the depth information value of that sub-image.
  • for example, if the depth information value of depth image A is 1 and that of depth image B is 5, the image of the overlap area of depth image A and depth image B is vertically divided into three equal sub-images, recorded from left to right as sub-image C, sub-image D, and sub-image E. Making the depth information differences between adjacent sub-images equal gives sub-image C a depth information value of 2, sub-image D a value of 3, and sub-image E a value of 4; depth-of-field adjustment is then performed on sub-image C using value 2, on sub-image D using value 3, and on sub-image E using value 4.
  • after the blurring and stitching of all captured images is completed, the panoramic image is obtained.
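The rule in this example amounts to linear interpolation between the two frames' depth values, and can be written as a short helper (the function name is illustrative):

```python
def sub_image_depths(depth_a, depth_b, n_sub):
    """Assign a depth information value to each of n_sub equal vertical
    sub-images of the overlap area so that the differences between
    adjacent values (including the two frames' own depth values at the
    two ends) are all equal, i.e. linear interpolation from depth_a to
    depth_b. With depth_a = 1, depth_b = 5, n_sub = 3 this reproduces
    the patent's example values 2, 3, 4."""
    step = (depth_b - depth_a) / (n_sub + 1)
    return [depth_a + step * (i + 1) for i in range(n_sub)]
```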
  • in this way, the depth information values of the two adjacent depth images are used to smooth the depth of field of the overlap-area image, so that the resulting panoramic image has a background blur effect, a strong stereoscopic effect, strong layering, prominent scenery, a natural stitching transition, and a good visual effect.
  • FIG. 2 is a schematic diagram of a terminal panoramic image processing apparatus according to an embodiment of the present invention.
  • the terminal panoramic image processing apparatus includes an obtaining module 21, a determining module 22, a depth-of-field smoothing processing module 23, and a panoramic stitching module 24, wherein:
  • the obtaining module 21 is configured to acquire two adjacent frames of depth images by panoramic shooting and obtain their respective depth information values.
  • the two cameras simultaneously capture the first frame of the scene, and the depth image of the scene is computed with a stereo matching algorithm and recorded as depth image A; the cameras are then shifted horizontally to the right by 2 cm and simultaneously capture the second frame of the scene, whose depth image is computed in the same way and recorded as depth image B.
  • because depth image B is obtained after shifting only a small distance from depth image A, most of the scene in depth image B overlaps with that in depth image A.
  • the determining module 22 is configured to determine the scene overlap area of the two depth images and obtain the image of the overlap area;
  • the determining module 22 includes:
  • the second determining sub-module 221, configured to move the two adjacent depth images toward each other to determine a pre-overlap area;
  • specifically, the above depth image A is moved to the right and depth image B is moved to the left, with both images moved by equal amounts.
  • the third calculation sub-module 222, configured to calculate the similarity index of the pre-overlap area;
  • the third calculation sub-module 222 is specifically configured to calculate the gray-value distribution histogram of each pre-overlap-area image (a distribution over the gray values 0-255), calculate the distance between the two grayscale histograms using the Euclidean distance, and use that distance as the similarity index.
  • the third determining sub-module 223, configured to determine the pre-overlap area as the scene overlap area when the similarity index satisfies the preset standard index.
  • depth image A and depth image B are moved repeatedly until the similarity index of the pre-overlap area satisfies the preset standard index.
  • alternatively, depth image B may start at the leftmost position of depth image A and be shifted to the right; if the similarity index of the pre-overlap area does not satisfy the preset standard index, depth image B is moved further until the similarity index meets the preset standard index.
  • the depth-of-field smoothing processing module 23 is configured to perform depth-of-field smoothing on the image of the overlap area using the depth information values.
  • the depth-of-field smoothing processing module 23 includes:
  • the first calculation sub-module 231, configured to calculate the mean of the depth information values of the two adjacent depth images;
  • the first depth-of-field adjustment sub-module 232, configured to perform depth-of-field adjustment on the image of the overlap area using the depth information mean.
  • for example, if the depth information value of depth image A is a and the depth information value of depth image B is b, the depth information mean of the two images is (a+b)/2, and (a+b)/2 is used to perform depth-of-field adjustment on the image of the overlap area of depth image A and depth image B.
  • specifically, the background of the overlap-area image of depth image A and depth image B is blurred: regions with larger depth information values are blurred by median filtering, while regions with smaller depth information values are sharpened so that they become clear.
  • after the blurring and stitching of all captured images is completed, the panoramic image is obtained.
  • FIG. 3 is a schematic diagram of another terminal panoramic image processing apparatus according to this embodiment, in which the depth-of-field smoothing processing module 23 includes:
  • the dividing sub-module 233, configured to vertically divide the image of the overlap area into at least two equal sub-images;
  • the second calculation sub-module 234, configured to calculate the sum of the depth information values of the two adjacent depth images;
  • the first determining sub-module 235, configured to determine the depth information value of each sub-image from the depth information sum and the depth information values of the two adjacent depth images, so that the depth information differences between adjacent sub-images are equal;
  • the second depth-of-field adjustment sub-module 236, configured to perform depth-of-field adjustment on each sub-image using its depth information value.
  • for example, if the depth information value of depth image A is 1 and that of depth image B is 5, the image of the overlap area of depth image A and depth image B is vertically divided into three equal sub-images, recorded from left to right as sub-image C, sub-image D, and sub-image E. Making the depth information differences between adjacent sub-images equal gives sub-image C a depth information value of 2, sub-image D a value of 3, and sub-image E a value of 4; depth-of-field adjustment is then performed on sub-image C using value 2, on sub-image D using value 3, and on sub-image E using value 4.
  • after the blurring and stitching of all captured images is completed, the panoramic image is obtained.
  • the panoramic stitching module 24 is configured to panoramically stitch the images processed by the depth-of-field smoothing processing module 23.
  • in this way, the depth information values of the two adjacent depth images are used to smooth the depth of field of the overlap-area image, so that the resulting panoramic image has a background blur effect, a strong stereoscopic effect, strong layering, prominent scenery, a natural stitching transition, and a good visual effect.
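Putting the pieces together, the final stitch performed by a module like the panoramic stitching module 24 might look as follows for one pair of frames with a known overlap width. Averaging the overlap strip here is a placeholder for inserting the depth-of-field-smoothed overlap image; all names are illustrative.

```python
import numpy as np

def stitch_pair(img_a, img_b, overlap):
    """Stitch two frames of equal height whose overlap width is known:
    keep A's non-overlapping left part, a blended overlap strip, and
    B's non-overlapping right part. The blend (a simple average) stands
    in for the depth-of-field-smoothed overlap image described above."""
    left = img_a[:, :img_a.shape[1] - overlap]
    strip = (img_a[:, img_a.shape[1] - overlap:].astype(np.float32)
             + img_b[:, :overlap].astype(np.float32)) / 2
    right = img_b[:, overlap:]
    return np.hstack([left, strip.astype(img_a.dtype), right])
```

Repeating this over all captured frames yields the full panorama.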
  • FIG. 4 is a schematic diagram of a terminal according to the embodiment.
  • the terminal includes the terminal panoramic image processing apparatus in the second embodiment.
  • the terminal acquires two adjacent frames of depth images by panoramic shooting and obtains their respective depth information values.
  • the two cameras simultaneously capture the first frame of the scene, and the depth image of the scene is computed with a stereo matching algorithm and recorded as depth image A; the cameras are then shifted horizontally to the right by 2 cm and simultaneously capture the second frame of the scene, whose depth image is computed in the same way and recorded as depth image B.
  • because depth image B is obtained after shifting only a small distance from depth image A, most of the scene in depth image B overlaps with that in depth image A.
  • the terminal determines the scene overlap area of the two depth images and obtains the image of the overlap area.
  • determining the scene overlap area of the two depth images includes:
  • moving the two adjacent depth images toward each other to determine a pre-overlap area;
  • specifically, the above depth image A is moved to the right and depth image B is moved to the left, with both images moved by equal amounts.
  • calculating the similarity index of the pre-overlap area; and
  • when the similarity index satisfies the preset standard index, determining the pre-overlap area as the scene overlap area.
  • depth image A and depth image B are moved repeatedly until the similarity index of the pre-overlap area satisfies the preset standard index.
  • alternatively, depth image B may start at the leftmost position of depth image A and be shifted to the right; if the similarity index of the pre-overlap area does not satisfy the preset standard index, depth image B is moved further until the similarity index meets the preset standard index.
  • the terminal performs depth-of-field smoothing on the image of the overlap area using the depth information values, and panoramically stitches the processed images.
  • performing depth-of-field smoothing on the image of the overlap area using the depth information values includes:
  • calculating the mean of the depth information values of the two adjacent depth images and using it to perform depth-of-field adjustment on the image of the overlap area.
  • for example, if the depth information value of depth image A is a and the depth information value of depth image B is b, the depth information mean of the two images is (a+b)/2, and (a+b)/2 is used to perform depth-of-field adjustment on the image of the overlap area of depth image A and depth image B.
  • specifically, the background of the overlap-area image of depth image A and depth image B is blurred: regions with larger depth information values are blurred by median filtering, while regions with smaller depth information values are sharpened so that they become clear.
  • after the blurring and stitching of all captured images is completed, the panoramic image is obtained.
  • alternatively, performing depth-of-field smoothing on the image of the overlap area using the depth information values includes:
  • performing depth-of-field adjustment on each sub-image using the depth information value of that sub-image.
  • for example, if the depth information value of depth image A is 1 and that of depth image B is 5, the image of the overlap area of depth image A and depth image B is vertically divided into three equal sub-images, recorded from left to right as sub-image C, sub-image D, and sub-image E. Making the depth information differences between adjacent sub-images equal gives sub-image C a depth information value of 2, sub-image D a value of 3, and sub-image E a value of 4; depth-of-field adjustment is then performed on sub-image C using value 2, on sub-image D using value 3, and on sub-image E using value 4.
  • after the blurring and stitching of all captured images is completed, the panoramic image is obtained.
  • in this way, the depth information values of the two adjacent depth images are used to smooth the depth of field of the overlap-area image, so that the resulting panoramic image has a background blur effect, a strong stereoscopic effect, strong layering, prominent scenery, a natural stitching transition, and a good visual effect.
  • FIG. 5 is a schematic diagram of another terminal; the terminal includes a processor 51 and a memory 52.
  • the memory 52 may store software programs and the like for the processing and control operations performed by the processor 51, and may temporarily store data (for example, audio data) that has been output or is to be output.
  • the memory 52 may include at least one type of storage medium, including flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disk, and the like.
  • the processor 51 typically controls the overall operation of the terminal. For example, the processor 51 performs the control and processing related to acquiring image depth information values, depth-of-field smoothing of images, and the like.
  • instructions implementing the terminal panoramic image processing method of the first embodiment are stored in the memory 52, and the processor 51 executes these instructions to implement:
  • the two cameras simultaneously capture the first frame of the scene, and the depth image of the scene is computed with a stereo matching algorithm and recorded as depth image A; the cameras are then shifted horizontally to the right by 2 cm and simultaneously capture the second frame of the scene, whose depth image is computed in the same way and recorded as depth image B.
  • because depth image B is obtained after shifting only a small distance from depth image A, most of the scene in depth image B overlaps with that in depth image A.
  • the scene overlap area of the two depth images is determined, and the image of the overlap area is obtained.
  • determining the scene overlap area of the two depth images includes:
  • moving the two adjacent depth images toward each other to determine a pre-overlap area;
  • specifically, the above depth image A is moved to the right and depth image B is moved to the left, with both images moved by equal amounts.
  • calculating the similarity index of the pre-overlap area; and
  • when the similarity index satisfies the preset standard index, determining the pre-overlap area as the scene overlap area.
  • depth image A and depth image B are moved repeatedly until the similarity index of the pre-overlap area satisfies the preset standard index.
  • alternatively, depth image B may start at the leftmost position of depth image A and be shifted to the right; if the similarity index of the pre-overlap area does not satisfy the preset standard index, depth image B is moved further until the similarity index meets the preset standard index.
  • the depth information values are used to perform depth-of-field smoothing on the image of the overlap area, and the processed images are panoramically stitched.
  • performing depth-of-field smoothing on the image of the overlap area using the depth information values includes:
  • calculating the mean of the depth information values of the two adjacent depth images and using it to perform depth-of-field adjustment on the image of the overlap area.
  • for example, if the depth information value of depth image A is a and the depth information value of depth image B is b, the depth information mean of the two images is (a+b)/2, and (a+b)/2 is used to perform depth-of-field adjustment on the image of the overlap area of depth image A and depth image B.
  • specifically, the background of the overlap-area image of depth image A and depth image B is blurred: regions with larger depth information values are blurred by median filtering, while regions with smaller depth information values are sharpened so that they become clear.
  • after the blurring and stitching of all captured images is completed, the panoramic image is obtained.
  • alternatively, performing depth-of-field smoothing on the image of the overlap area using the depth information values includes:
  • performing depth-of-field adjustment on each sub-image using the depth information value of that sub-image.
  • for example, if the depth information value of depth image A is 1 and that of depth image B is 5, the image of the overlap area of depth image A and depth image B is vertically divided into three equal sub-images, recorded from left to right as sub-image C, sub-image D, and sub-image E. Making the depth information differences between adjacent sub-images equal gives sub-image C a depth information value of 2, sub-image D a value of 3, and sub-image E a value of 4; depth-of-field adjustment is then performed on sub-image C using value 2, on sub-image D using value 3, and on sub-image E using value 4.
  • after the blurring and stitching of all captured images is completed, the panoramic image is obtained.
  • in this way, the depth information values of the two adjacent depth images are used to smooth the depth of field of the overlap-area image, so that the resulting panoramic image has a background blur effect, a strong stereoscopic effect, strong layering, prominent scenery, a natural stitching transition, and a good visual effect.
  • modules or steps of the above embodiments of the present invention can be implemented by a general computing device, which can be concentrated on a single computing device or distributed among multiple computing devices.
  • they may be implemented by program code executable by the computing device, such that they may be stored in a storage medium (ROM/RAM, disk, optical disk) by a computing device, and in some
  • the steps shown or described may be performed in an order different from that herein, or they may be separately fabricated into individual integrated circuit modules, or a plurality of the modules or steps may be implemented as a single integrated circuit module. Therefore, the invention is not limited to any particular combination of hardware and software.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention provides a terminal panoramic image processing method and apparatus, and a terminal. The terminal panoramic image processing method includes: acquiring two adjacent depth images through panoramic shooting, and acquiring the depth information values respectively corresponding to them; determining the scene overlap region of the two depth images, and obtaining the image of the overlap region; performing depth-of-field smoothing on the image of the overlap region using the depth information values, and stitching the processed images into a panorama. With the above scheme, the depth information values of the two adjacent depth images are used to perform depth-of-field smoothing on the image of the overlap region, so that the resulting panoramic image has a background-blur effect, a strong sense of depth and layering, prominent subjects, natural stitching transitions, and a good visual effect.

Description

Terminal panoramic image processing method and apparatus, and terminal
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on November 17, 2016, with application number 201611020684.7 and invention title "Terminal panoramic image processing method and apparatus, and terminal", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of terminal technology, and in particular to a terminal panoramic image processing method and apparatus, and a terminal.
Background
At present, panoramic shooting with a mobile phone basically uses a single camera: the phone is panned while shooting. During the pan, the user must move the phone horizontally and steadily; when it reaches exactly the position indicated by the arrow in the viewfinder, the next image is captured and stitched to the previous one. After rotating through 180 degrees, a panoramic image is obtained.
Since a single camera cannot capture the depth information of an image, the panoramic image obtained by single-camera shooting has no background blur and cannot convey a sense of depth or layering.
If the phone is panned too quickly, the stitched image transitions unnaturally: the two images do not fully blend, producing distortion.
Summary of the Invention
The technical problem mainly solved by the present invention is to provide a terminal panoramic image processing method and apparatus, and a terminal, to solve the problems in the prior art that a panoramic image obtained by single-camera shooting has no background blur, cannot convey a sense of depth or layering, and has unnatural transitions after stitching.
To solve the above technical problem, the present invention provides a terminal panoramic image processing method, including:
acquiring two adjacent depth images through panoramic shooting, and acquiring the depth information values respectively corresponding to them;
determining the scene overlap region of the two depth images, and obtaining the image of the overlap region;
performing depth-of-field smoothing on the image of the overlap region using the depth information values, and stitching the processed images into a panorama.
Wherein, performing depth-of-field smoothing on the image of the overlap region using the depth information values includes:
computing the mean of the depth information values respectively corresponding to the two adjacent depth images;
adjusting the depth of field of the image of the overlap region using the mean depth information value.
Wherein, performing depth-of-field smoothing on the image of the overlap region using the depth information values includes:
vertically dividing the image of the overlap region into at least two equal sub-images;
computing the sum of the depth information values respectively corresponding to the two adjacent depth images;
determining the depth information value of each equal sub-image according to the sum and the depth information values respectively corresponding to the two adjacent depth images, such that the depth information differences between adjacent equal sub-images are equal;
adjusting the depth of field of each equal sub-image using its depth information value.
Wherein, determining the scene overlap region of the two depth images includes:
stepping the two adjacent depth images toward each other to determine a pre-overlap region;
computing the similarity index of the pre-overlap region;
when the similarity index satisfies a preset standard index, determining the pre-overlap region as the scene overlap region.
Wherein, computing the similarity index of the pre-overlap region includes:
computing the gray-value distribution histograms of the images in the pre-overlap region;
computing the distance between the two grayscale images using the Euclidean distance algorithm;
taking the distance as the similarity index.
To solve the above technical problem, the present invention provides a terminal panoramic image processing apparatus, including:
an acquisition module, configured to acquire two adjacent depth images through panoramic shooting and acquire the depth information values respectively corresponding to them;
a determination module, configured to determine the scene overlap region of the two depth images and obtain the image of the overlap region;
a depth-of-field smoothing module, configured to perform depth-of-field smoothing on the image of the overlap region using the depth information values;
a panoramic stitching module, configured to stitch the images processed by the depth-of-field smoothing module into a panorama.
Wherein, the depth-of-field smoothing module includes:
a first computation submodule, configured to compute the mean of the depth information values respectively corresponding to the two adjacent depth images;
a first depth-of-field adjustment submodule, configured to adjust the depth of field of the image of the overlap region using the mean depth information value.
Wherein, the depth-of-field smoothing module includes:
a division submodule, configured to vertically divide the image of the overlap region into at least two equal sub-images;
a second computation submodule, configured to compute the sum of the depth information values respectively corresponding to the two adjacent depth images;
a first determination submodule, configured to determine the depth information value of each equal sub-image according to the sum and the depth information values respectively corresponding to the two adjacent depth images, such that the depth information differences between adjacent equal sub-images are equal;
a second depth-of-field adjustment submodule, configured to adjust the depth of field of each equal sub-image using its depth information value.
Wherein, the determination module includes:
a second determination submodule, configured to step the two adjacent depth images toward each other to determine a pre-overlap region;
a third computation submodule, configured to compute the similarity index of the pre-overlap region;
a third determination submodule, configured to determine the pre-overlap region as the scene overlap region when the similarity index satisfies a preset standard index.
Wherein, the third computation submodule is specifically configured to compute the gray-value distribution histograms of the images in the pre-overlap region, compute the distance between the two grayscale images using the Euclidean distance algorithm, and take the distance as the similarity index.
To solve the above technical problem, the present invention provides a terminal including the above terminal panoramic image processing apparatus.
According to the terminal panoramic image processing method and apparatus and the terminal provided by the present invention, the method includes: acquiring two adjacent depth images through panoramic shooting, and acquiring the depth information values respectively corresponding to them; determining the scene overlap region of the two depth images, and obtaining the image of the overlap region; performing depth-of-field smoothing on the image of the overlap region using the depth information values, and stitching the processed images into a panorama. With the above scheme, the depth information values of the two adjacent depth images are used to perform depth-of-field smoothing on the image of the overlap region, so that the resulting panoramic image has a background-blur effect, a strong sense of depth and layering, prominent subjects, natural stitching transitions, and a good visual effect.
Brief Description of the Drawings
FIG. 1 is a flowchart of a terminal panoramic image processing method provided in Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of a terminal panoramic image processing apparatus provided in Embodiment 2 of the present invention;
FIG. 3 is a schematic diagram of another terminal panoramic image processing apparatus provided in Embodiment 2 of the present invention;
FIG. 4 is a schematic diagram of a terminal provided in Embodiment 3 of the present invention;
FIG. 5 is a schematic diagram of another terminal provided in Embodiment 4 of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are only intended to explain the present invention, not to limit it.
The present invention is further described in detail below through specific embodiments in conjunction with the accompanying drawings.
Embodiment 1
This embodiment provides a terminal panoramic image processing method. Referring to FIG. 1, which is a flowchart of the terminal panoramic image processing method provided by this embodiment, the method includes the following steps:
S11: Acquire two adjacent depth images through panoramic shooting, and acquire the depth information values respectively corresponding to them.
For example, the first scene is captured simultaneously by two cameras of the terminal, and a stereo matching algorithm is used to obtain the depth image of the scene, recorded as depth image A; the cameras are then panned horizontally to the right by 2 cm, the second scene is again captured simultaneously by the two cameras, and the stereo matching algorithm is used to obtain the depth image of that scene, recorded as depth image B.
After depth image A and depth image B are acquired, the depth information value of depth image A and the depth information value of depth image B must also be determined.
Depth image B is obtained after depth image A is panned a short distance; therefore, most of the scene in depth image B overlaps with that in depth image A.
The stereo matching algorithm is the process of establishing correspondences between matching primitives in two images; it is the key to a binocular stereo vision system. In fact, every computer vision system contains a matching algorithm at its core, so research on matching algorithms is extremely important.
To examine matching algorithms more comprehensively, the matching problem of binocular stereo vision can be extended to the more general case: suppose two images of the same environment are given, which may differ because of the time, orientation, or manner of capture, such as the two images captured by a binocular stereo vision system, or a map versus remote sensing or aerial survey images. To find the parts that correspond to each other, there are generally two approaches: (1) correlation of gray-level distributions; (2) similarity of feature distributions. Accordingly, there are two classes of algorithms: (1) intensity-based algorithms; (2) feature-based algorithms. Classified by control strategy, there are: (1) coarse-to-fine hierarchical structures; (2) relaxation methods that introduce constraints; (3) multilevel-representation decision structures.
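As a rough illustration of the intensity-based class of matching algorithms mentioned above (not part of the patent; the function name, block size, and search range are illustrative assumptions), a minimal sum-of-absolute-differences (SAD) block matcher searches along each row for the best-matching block and outputs a disparity map; depth is inversely proportional to disparity:

```python
import numpy as np

def block_matching_disparity(left, right, block=5, max_disp=16):
    """Minimal intensity-based (SAD) block matching: for each pixel of the
    left image, search along the same row of the right image for the block
    with the smallest sum of absolute differences. Returns a disparity map;
    depth is inversely proportional to disparity."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            # candidate disparities are limited so the block stays in bounds
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                sad = np.abs(ref.astype(np.int32) - cand.astype(np.int32)).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

A production system would instead use a coarse-to-fine or constrained variant as the passage above classifies; this sketch only shows the core correspondence search.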
S12: Determine the scene overlap region of the two depth images, and obtain the image of the overlap region.
Determining the scene overlap region of the two depth images in S12 includes:
First, stepping the two adjacent depth images toward each other to determine a pre-overlap region.
For example, the above depth image A is moved to the right and depth image B is moved to the left, and in each move the step sizes of depth image A and depth image B are equal.
Then computing the similarity index of the pre-overlap region.
Wherein, computing the similarity index of the pre-overlap region includes:
computing the gray-value distribution histograms of the images in the pre-overlap region, i.e., obtaining the distribution histograms over gray values 0-255;
computing the distance between the two grayscale images using the Euclidean distance algorithm;
taking the distance as the similarity index.
Finally, when the similarity index satisfies the preset standard index, the pre-overlap region is determined as the scene overlap region.
If the similarity index does not satisfy the preset standard index, depth image A and depth image B continue to be moved until the similarity index of the pre-overlap region satisfies the preset standard index.
In another implementation, depth image B may also start from the leftmost position of depth image A and be panned to the right; if the similarity index of the pre-overlap region does not satisfy the preset standard index, depth image B continues to be moved until the similarity index satisfies the preset standard index.
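The overlap search just described (step, histogram, Euclidean distance, standard index) can be sketched as follows. This is an illustrative reading of the text, not the patent's authoritative implementation; `threshold` plays the role of the preset standard index, and histograms are normalised so strips of different widths are comparable (an assumption the text does not state):

```python
import numpy as np

def gray_histogram(img):
    """Gray-value distribution histogram over 0-255, normalised so that
    candidate strips of different sizes can be compared."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()

def find_overlap_width(img_a, img_b, step=1, threshold=1e-6):
    """Step the two adjacent images toward each other: at width w the
    pre-overlap region is the rightmost w columns of A and the leftmost
    w columns of B. The similarity index is the Euclidean distance between
    the gray-value histograms of the two strips; the first width whose
    index satisfies the standard (distance <= threshold) is taken as the
    scene overlap width. Returns None if no width qualifies."""
    max_w = min(img_a.shape[1], img_b.shape[1])
    for w in range(step, max_w + 1, step):
        d = np.linalg.norm(gray_histogram(img_a[:, -w:]) -
                           gray_histogram(img_b[:, :w]))
        if d <= threshold:
            return w
    return None
```

With a larger `threshold` the search terminates earlier at an approximate overlap, which corresponds to loosening the preset standard index.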
S13: Perform depth-of-field smoothing on the image of the overlap region using the depth information values, and stitch the processed images into a panorama.
In one implementation, performing depth-of-field smoothing on the image of the overlap region using the depth information values in S13 includes:
computing the mean of the depth information values respectively corresponding to the two adjacent depth images;
adjusting the depth of field of the image of the overlap region using the mean depth information value.
For example, if the depth information value of the above depth image A is a and that of depth image B is b, the mean depth information value of depth image A and depth image B is (a+b)/2; (a+b)/2 is used to adjust the depth of field of the image of the overlap region of depth image A and depth image B.
According to (a+b)/2, the background of the image of the overlap region of depth image A and depth image B is blurred; during blurring, regions where the depth information value is larger are median-filtered and become blurred, and regions where the depth information value is smaller are sharpened and become clearer. Proceeding in the same way, the panoramic image is obtained after the blurring and stitching of all captured images is completed.
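A minimal NumPy sketch of this blur/sharpen rule follows. It is illustrative only: the patent does not specify filter sizes, so the 3x3 median filter and 4-neighbour Laplacian sharpening below are assumptions, and the overlap image is assumed grayscale:

```python
import numpy as np

def median3x3(img):
    """3x3 median filter (edges replicated), NumPy only."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    stack = [p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)]
    return np.median(np.stack(stack), axis=0)

def sharpen(img):
    """Laplacian sharpening: emphasises edges so the region looks clearer."""
    p = np.pad(img.astype(np.float64), 1, mode='edge')
    lap = (4 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]
           - p[1:-1, :-2] - p[1:-1, 2:])
    return np.clip(img + lap, 0, 255)

def smooth_overlap(overlap_gray, depth_map, a, b):
    """Use the mean depth (a + b) / 2 of depth images A and B as the split:
    pixels whose depth information value is larger (background) take the
    median-filtered (blurred) value; pixels whose value is smaller
    (foreground) take the sharpened (clearer) value."""
    mean_depth = (a + b) / 2
    return np.where(depth_map > mean_depth,
                    median3x3(overlap_gray),
                    sharpen(overlap_gray))
```

A per-pixel depth map is assumed here; if only the two scalar values a and b are available, `depth_map` would be a coarser region labelling derived from them.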
In another implementation, performing depth-of-field smoothing on the image of the overlap region using the depth information values in S13 includes:
vertically dividing the image of the overlap region into at least two equal sub-images;
computing the sum of the depth information values respectively corresponding to the two adjacent depth images;
determining the depth information value of each equal sub-image according to the sum and the depth information values respectively corresponding to the two adjacent depth images, such that the depth information differences between adjacent equal sub-images are equal;
adjusting the depth of field of each equal sub-image using its depth information value.
For example, if the depth information value of depth image A is 1 and that of depth image B is 5, the image of the overlap region of depth image A and depth image B is vertically divided into three equal sub-images, recorded from left to right as equal sub-image C, equal sub-image D, and equal sub-image E. Since the depth information differences between adjacent equal sub-images are to be equal, the depth information value of equal sub-image C is 2, that of equal sub-image D is 3, and that of equal sub-image E is 4; depth information value 2 is then used to adjust the depth of field of equal sub-image C, value 3 for equal sub-image D, and value 4 for equal sub-image E. Proceeding in the same way, the panoramic image is obtained after the blurring and stitching of all captured images is completed.
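The equal-difference rule in this example amounts to linear interpolation between the two depth information values. A short sketch (illustrative; the function names are not from the patent):

```python
import numpy as np

def sub_image_depths(depth_a, depth_b, n_subs):
    """Depth information values for n vertical equal sub-images of the
    overlap region, chosen so that the depth differences between adjacent
    sub-images (and between the end sub-images and depth images A and B)
    are all equal - i.e. linear interpolation between depth_a and depth_b."""
    step = (depth_b - depth_a) / (n_subs + 1)
    return [depth_a + step * (i + 1) for i in range(n_subs)]

def split_vertical(overlap, n_subs):
    """Vertically divide the overlap image into n equal sub-images,
    ordered left to right (columns are split)."""
    return np.array_split(overlap, n_subs, axis=1)
```

With depth_a = 1, depth_b = 5 and three sub-images this reproduces the values 2, 3, 4 of the example above; each sub-image would then be blurred or sharpened with its own value.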
Through the implementation of this embodiment, the depth information values of the two adjacent depth images are used to perform depth-of-field smoothing on the image of the overlap region, so that the resulting panoramic image has a background-blur effect, a strong sense of depth and layering, prominent subjects, natural stitching transitions, and a good visual effect.
Embodiment 2
This embodiment provides a terminal panoramic image processing apparatus. Referring to FIG. 2, which is a schematic diagram of the terminal panoramic image processing apparatus provided by this embodiment, the apparatus includes an acquisition module 21, a determination module 22, a depth-of-field smoothing module 23, and a panoramic stitching module 24, wherein:
The acquisition module 21 is configured to acquire two adjacent depth images through panoramic shooting and acquire the depth information values respectively corresponding to them.
For example, the first scene is captured simultaneously by two cameras of the terminal, and a stereo matching algorithm is used to obtain the depth image of the scene, recorded as depth image A; the cameras are then panned horizontally to the right by 2 cm, the second scene is again captured simultaneously by the two cameras, and the stereo matching algorithm is used to obtain the depth image of that scene, recorded as depth image B.
After depth image A and depth image B are acquired, the depth information value of depth image A and the depth information value of depth image B must also be determined.
Depth image B is obtained after depth image A is panned a short distance; therefore, most of the scene in depth image B overlaps with that in depth image A.
The determination module 22 is configured to determine the scene overlap region of the two depth images and obtain the image of the overlap region.
Wherein, the determination module 22 includes:
a second determination submodule 221, configured to step the two adjacent depth images toward each other to determine a pre-overlap region.
For example, the above depth image A is moved to the right and depth image B is moved to the left, and in each move the step sizes of depth image A and depth image B are equal.
a third computation submodule 222, configured to compute the similarity index of the pre-overlap region.
Wherein, the third computation submodule 222 is specifically configured to compute the gray-value distribution histograms of the images in the pre-overlap region, i.e., the distribution histograms over gray values 0-255; compute the distance between the two grayscale images using the Euclidean distance algorithm; and take the distance as the similarity index.
a third determination submodule 223, configured to determine the pre-overlap region as the scene overlap region when the similarity index satisfies the preset standard index.
If the similarity index does not satisfy the preset standard index, depth image A and depth image B continue to be moved until the similarity index of the pre-overlap region satisfies the preset standard index.
In another implementation, depth image B may also start from the leftmost position of depth image A and be panned to the right; if the similarity index of the pre-overlap region does not satisfy the preset standard index, depth image B continues to be moved until the similarity index satisfies the preset standard index.
The depth-of-field smoothing module 23 is configured to perform depth-of-field smoothing on the image of the overlap region using the depth information values.
In one implementation, the depth-of-field smoothing module 23 includes:
a first computation submodule 231, configured to compute the mean of the depth information values respectively corresponding to the two adjacent depth images;
a first depth-of-field adjustment submodule 232, configured to adjust the depth of field of the image of the overlap region using the mean depth information value.
For example, if the depth information value of the above depth image A is a and that of depth image B is b, the mean depth information value of depth image A and depth image B is (a+b)/2; (a+b)/2 is used to adjust the depth of field of the image of the overlap region of depth image A and depth image B.
According to (a+b)/2, the background of the image of the overlap region of depth image A and depth image B is blurred; during blurring, regions where the depth information value is larger are median-filtered and become blurred, and regions where the depth information value is smaller are sharpened and become clearer. Proceeding in the same way, the panoramic image is obtained after the blurring and stitching of all captured images is completed.
In another implementation, referring to FIG. 3, which is a schematic diagram of another terminal panoramic image processing apparatus provided by this embodiment, the depth-of-field smoothing module 23 includes:
a division submodule 233, configured to vertically divide the image of the overlap region into at least two equal sub-images;
a second computation submodule 234, configured to compute the sum of the depth information values respectively corresponding to the two adjacent depth images;
a first determination submodule 235, configured to determine the depth information value of each equal sub-image according to the sum and the depth information values respectively corresponding to the two adjacent depth images, such that the depth information differences between adjacent equal sub-images are equal;
a second depth-of-field adjustment submodule 236, configured to adjust the depth of field of each equal sub-image using its depth information value.
For example, if the depth information value of depth image A is 1 and that of depth image B is 5, the image of the overlap region of depth image A and depth image B is vertically divided into three equal sub-images, recorded from left to right as equal sub-image C, equal sub-image D, and equal sub-image E. Since the depth information differences between adjacent equal sub-images are to be equal, the depth information value of equal sub-image C is 2, that of equal sub-image D is 3, and that of equal sub-image E is 4; depth information value 2 is then used to adjust the depth of field of equal sub-image C, value 3 for equal sub-image D, and value 4 for equal sub-image E. Proceeding in the same way, the panoramic image is obtained after the blurring and stitching of all captured images is completed.
The panoramic stitching module 24 is configured to stitch the images processed by the depth-of-field smoothing module 23 into a panorama.
Through the implementation of this embodiment, the depth information values of the two adjacent depth images are used to perform depth-of-field smoothing on the image of the overlap region, so that the resulting panoramic image has a background-blur effect, a strong sense of depth and layering, prominent subjects, natural stitching transitions, and a good visual effect.
Embodiment 3
This embodiment provides a terminal. Referring to FIG. 4, which is a schematic diagram of the terminal provided by this embodiment, the terminal includes the terminal panoramic image processing apparatus of Embodiment 2.
The terminal acquires two adjacent depth images through panoramic shooting, and acquires the depth information values respectively corresponding to them.
For example, the first scene is captured simultaneously by two cameras of the terminal, and a stereo matching algorithm is used to obtain the depth image of the scene, recorded as depth image A; the cameras are then panned horizontally to the right by 2 cm, the second scene is again captured simultaneously by the two cameras, and the stereo matching algorithm is used to obtain the depth image of that scene, recorded as depth image B.
After depth image A and depth image B are acquired, the depth information value of depth image A and the depth information value of depth image B must also be determined.
Depth image B is obtained after depth image A is panned a short distance; therefore, most of the scene in depth image B overlaps with that in depth image A.
The terminal determines the scene overlap region of the two depth images, and obtains the image of the overlap region.
Wherein, determining the scene overlap region of the two depth images includes:
First, stepping the two adjacent depth images toward each other to determine a pre-overlap region.
For example, the above depth image A is moved to the right and depth image B is moved to the left, and in each move the step sizes of depth image A and depth image B are equal.
Then computing the similarity index of the pre-overlap region.
Wherein, computing the similarity index of the pre-overlap region includes:
computing the gray-value distribution histograms of the images in the pre-overlap region, i.e., obtaining the distribution histograms over gray values 0-255;
computing the distance between the two grayscale images using the Euclidean distance algorithm;
taking the distance as the similarity index.
Finally, when the similarity index satisfies the preset standard index, the pre-overlap region is determined as the scene overlap region.
If the similarity index does not satisfy the preset standard index, depth image A and depth image B continue to be moved until the similarity index of the pre-overlap region satisfies the preset standard index.
In another implementation, depth image B may also start from the leftmost position of depth image A and be panned to the right; if the similarity index of the pre-overlap region does not satisfy the preset standard index, depth image B continues to be moved until the similarity index satisfies the preset standard index.
The terminal performs depth-of-field smoothing on the image of the overlap region using the depth information values, and stitches the processed images into a panorama.
In one implementation, performing depth-of-field smoothing on the image of the overlap region using the depth information values includes:
computing the mean of the depth information values respectively corresponding to the two adjacent depth images;
adjusting the depth of field of the image of the overlap region using the mean depth information value.
For example, if the depth information value of the above depth image A is a and that of depth image B is b, the mean depth information value of depth image A and depth image B is (a+b)/2; (a+b)/2 is used to adjust the depth of field of the image of the overlap region of depth image A and depth image B.
According to (a+b)/2, the background of the image of the overlap region of depth image A and depth image B is blurred; during blurring, regions where the depth information value is larger are median-filtered and become blurred, and regions where the depth information value is smaller are sharpened and become clearer. Proceeding in the same way, the panoramic image is obtained after the blurring and stitching of all captured images is completed.
In another implementation, performing depth-of-field smoothing on the image of the overlap region using the depth information values includes:
vertically dividing the image of the overlap region into at least two equal sub-images;
computing the sum of the depth information values respectively corresponding to the two adjacent depth images;
determining the depth information value of each equal sub-image according to the sum and the depth information values respectively corresponding to the two adjacent depth images, such that the depth information differences between adjacent equal sub-images are equal;
adjusting the depth of field of each equal sub-image using its depth information value.
For example, if the depth information value of depth image A is 1 and that of depth image B is 5, the image of the overlap region of depth image A and depth image B is vertically divided into three equal sub-images, recorded from left to right as equal sub-image C, equal sub-image D, and equal sub-image E. Since the depth information differences between adjacent equal sub-images are to be equal, the depth information value of equal sub-image C is 2, that of equal sub-image D is 3, and that of equal sub-image E is 4; depth information value 2 is then used to adjust the depth of field of equal sub-image C, value 3 for equal sub-image D, and value 4 for equal sub-image E. Proceeding in the same way, the panoramic image is obtained after the blurring and stitching of all captured images is completed.
Through the implementation of this embodiment, the depth information values of the two adjacent depth images are used to perform depth-of-field smoothing on the image of the overlap region, so that the resulting panoramic image has a background-blur effect, a strong sense of depth and layering, prominent subjects, natural stitching transitions, and a good visual effect.
Embodiment 4
To facilitate better implementation of the terminal panoramic image processing method of Embodiment 1, this embodiment provides a terminal for implementing that method. Referring to FIG. 5, which is a schematic diagram of the terminal provided by this embodiment, the terminal includes a processor 51 and a memory 52.
The memory 52 can store software programs for the processing and control operations executed by the processor 51, and can also temporarily store data that has been output or is to be output (for example, audio data).
The memory 52 may include at least one type of storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disc, and so on.
The processor 51 typically performs the overall operations of the terminal. For example, the processor 51 performs control and processing related to acquiring the depth information values of images and performing depth-of-field smoothing on images.
The memory 52 stores a plurality of instructions for implementing the terminal panoramic image processing method of Embodiment 1, and the processor 51 executes the plurality of instructions to implement the following:
Acquire two adjacent depth images through panoramic shooting, and acquire the depth information values respectively corresponding to them.
For example, the first scene is captured simultaneously by two cameras of the terminal, and a stereo matching algorithm is used to obtain the depth image of the scene, recorded as depth image A; the cameras are then panned horizontally to the right by 2 cm, the second scene is again captured simultaneously by the two cameras, and the stereo matching algorithm is used to obtain the depth image of that scene, recorded as depth image B.
After depth image A and depth image B are acquired, the depth information value of depth image A and the depth information value of depth image B must also be determined.
Depth image B is obtained after depth image A is panned a short distance; therefore, most of the scene in depth image B overlaps with that in depth image A.
Determine the scene overlap region of the two depth images, and obtain the image of the overlap region.
Wherein, determining the scene overlap region of the two depth images includes:
First, stepping the two adjacent depth images toward each other to determine a pre-overlap region.
For example, the above depth image A is moved to the right and depth image B is moved to the left, and in each move the step sizes of depth image A and depth image B are equal.
Then computing the similarity index of the pre-overlap region.
Wherein, computing the similarity index of the pre-overlap region includes:
computing the gray-value distribution histograms of the images in the pre-overlap region, i.e., obtaining the distribution histograms over gray values 0-255;
computing the distance between the two grayscale images using the Euclidean distance algorithm;
taking the distance as the similarity index.
Finally, when the similarity index satisfies the preset standard index, the pre-overlap region is determined as the scene overlap region.
If the similarity index does not satisfy the preset standard index, depth image A and depth image B continue to be moved until the similarity index of the pre-overlap region satisfies the preset standard index.
In another implementation, depth image B may also start from the leftmost position of depth image A and be panned to the right; if the similarity index of the pre-overlap region does not satisfy the preset standard index, depth image B continues to be moved until the similarity index satisfies the preset standard index.
Perform depth-of-field smoothing on the image of the overlap region using the depth information values, and stitch the processed images into a panorama.
In one implementation, performing depth-of-field smoothing on the image of the overlap region using the depth information values includes:
computing the mean of the depth information values respectively corresponding to the two adjacent depth images;
adjusting the depth of field of the image of the overlap region using the mean depth information value.
For example, if the depth information value of the above depth image A is a and that of depth image B is b, the mean depth information value of depth image A and depth image B is (a+b)/2; (a+b)/2 is used to adjust the depth of field of the image of the overlap region of depth image A and depth image B.
According to (a+b)/2, the background of the image of the overlap region of depth image A and depth image B is blurred; during blurring, regions where the depth information value is larger are median-filtered and become blurred, and regions where the depth information value is smaller are sharpened and become clearer. Proceeding in the same way, the panoramic image is obtained after the blurring and stitching of all captured images is completed.
In another implementation, performing depth-of-field smoothing on the image of the overlap region using the depth information values includes:
vertically dividing the image of the overlap region into at least two equal sub-images;
computing the sum of the depth information values respectively corresponding to the two adjacent depth images;
determining the depth information value of each equal sub-image according to the sum and the depth information values respectively corresponding to the two adjacent depth images, such that the depth information differences between adjacent equal sub-images are equal;
adjusting the depth of field of each equal sub-image using its depth information value.
For example, if the depth information value of depth image A is 1 and that of depth image B is 5, the image of the overlap region of depth image A and depth image B is vertically divided into three equal sub-images, recorded from left to right as equal sub-image C, equal sub-image D, and equal sub-image E. Since the depth information differences between adjacent equal sub-images are to be equal, the depth information value of equal sub-image C is 2, that of equal sub-image D is 3, and that of equal sub-image E is 4; depth information value 2 is then used to adjust the depth of field of equal sub-image C, value 3 for equal sub-image D, and value 4 for equal sub-image E. Proceeding in the same way, the panoramic image is obtained after the blurring and stitching of all captured images is completed.
Through the implementation of this embodiment, the depth information values of the two adjacent depth images are used to perform depth-of-field smoothing on the image of the overlap region, so that the resulting panoramic image has a background-blur effect, a strong sense of depth and layering, prominent subjects, natural stitching transitions, and a good visual effect.
Obviously, those skilled in the art should understand that the modules or steps of the above embodiments of the present invention can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be implemented by program code executable by a computing device, so that they can be stored in a storage medium (ROM/RAM, magnetic disk, optical disc) and executed by a computing device; and in some cases, the steps shown or described can be performed in an order different from that herein, or they can be fabricated separately into individual integrated-circuit modules, or several of the modules or steps can be implemented as a single integrated-circuit module. Therefore, the present invention is not limited to any particular combination of hardware and software.
The above content is a further detailed description of the embodiments of the present invention in conjunction with specific implementations, and the specific implementation of the present invention shall not be considered limited to these descriptions. For those of ordinary skill in the technical field to which the present invention belongs, several simple deductions or substitutions can be made without departing from the concept of the present invention, all of which shall be regarded as falling within the protection scope of the present invention.

Claims (11)

  1. A terminal panoramic image processing method, characterized by comprising:
    acquiring two adjacent depth images through panoramic shooting, and acquiring the depth information values respectively corresponding to them;
    determining the scene overlap region of the two depth images, and obtaining the image of the overlap region;
    performing depth-of-field smoothing on the image of the overlap region using the depth information values, and stitching the processed images into a panorama.
  2. The terminal panoramic image processing method according to claim 1, characterized in that performing depth-of-field smoothing on the image of the overlap region using the depth information values comprises:
    computing the mean of the depth information values respectively corresponding to the two adjacent depth images;
    adjusting the depth of field of the image of the overlap region using the mean depth information value.
  3. The terminal panoramic image processing method according to claim 1, characterized in that performing depth-of-field smoothing on the image of the overlap region using the depth information values comprises:
    vertically dividing the image of the overlap region into at least two equal sub-images;
    computing the sum of the depth information values respectively corresponding to the two adjacent depth images;
    determining the depth information value of each equal sub-image according to the sum and the depth information values respectively corresponding to the two adjacent depth images, such that the depth information differences between adjacent equal sub-images are equal;
    adjusting the depth of field of each equal sub-image using its depth information value.
  4. The terminal panoramic image processing method according to any one of claims 1-3, characterized in that determining the scene overlap region of the two depth images comprises:
    stepping the two adjacent depth images toward each other to determine a pre-overlap region;
    computing the similarity index of the pre-overlap region;
    when the similarity index satisfies a preset standard index, determining the pre-overlap region as the scene overlap region.
  5. The terminal panoramic image processing method according to claim 4, characterized in that computing the similarity index of the pre-overlap region comprises:
    computing the gray-value distribution histograms of the images in the pre-overlap region;
    computing the distance between the two grayscale images using the Euclidean distance algorithm;
    taking the distance as the similarity index.
  6. A terminal panoramic image processing apparatus, characterized by comprising:
    an acquisition module, configured to acquire two adjacent depth images through panoramic shooting and acquire the depth information values respectively corresponding to them;
    a determination module, configured to determine the scene overlap region of the two depth images and obtain the image of the overlap region;
    a depth-of-field smoothing module, configured to perform depth-of-field smoothing on the image of the overlap region using the depth information values;
    a panoramic stitching module, configured to stitch the images processed by the depth-of-field smoothing module into a panorama.
  7. The terminal panoramic image processing apparatus according to claim 6, characterized in that the depth-of-field smoothing module comprises:
    a first computation submodule, configured to compute the mean of the depth information values respectively corresponding to the two adjacent depth images;
    a first depth-of-field adjustment submodule, configured to adjust the depth of field of the image of the overlap region using the mean depth information value.
  8. The terminal panoramic image processing apparatus according to claim 6, characterized in that the depth-of-field smoothing module comprises:
    a division submodule, configured to vertically divide the image of the overlap region into at least two equal sub-images;
    a second computation submodule, configured to compute the sum of the depth information values respectively corresponding to the two adjacent depth images;
    a first determination submodule, configured to determine the depth information value of each equal sub-image according to the sum and the depth information values respectively corresponding to the two adjacent depth images, such that the depth information differences between adjacent equal sub-images are equal;
    a second depth-of-field adjustment submodule, configured to adjust the depth of field of each equal sub-image using its depth information value.
  9. The terminal panoramic image processing apparatus according to any one of claims 6-8, characterized in that the determination module comprises:
    a second determination submodule, configured to step the two adjacent depth images toward each other to determine a pre-overlap region;
    a third computation submodule, configured to compute the similarity index of the pre-overlap region;
    a third determination submodule, configured to determine the pre-overlap region as the scene overlap region when the similarity index satisfies a preset standard index.
  10. The terminal panoramic image processing apparatus according to claim 9, characterized in that the third computation submodule is specifically configured to compute the gray-value distribution histograms of the images in the pre-overlap region, compute the distance between the two grayscale images using the Euclidean distance algorithm, and take the distance as the similarity index.
  11. A terminal, characterized by comprising the terminal panoramic image processing apparatus according to any one of claims 6-10.
PCT/CN2016/112775 2016-11-17 2016-12-28 Terminal panoramic image processing method and apparatus, and terminal WO2018090455A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611020684.7 2016-11-17
CN201611020684.7A CN106651755A (zh) 2016-11-17 2016-11-17 Terminal panoramic image processing method and apparatus, and terminal

Publications (1)

Publication Number Publication Date
WO2018090455A1 true WO2018090455A1 (zh) 2018-05-24

Family

ID=58807574

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/112775 WO2018090455A1 (zh) 2016-11-17 2016-12-28 一种终端全景图像处理方法、装置及终端

Country Status (2)

Country Link
CN (1) CN106651755A (zh)
WO (1) WO2018090455A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108876722A (zh) * 2018-06-19 2018-11-23 国网浙江省电力有限公司温州供电公司 VR panoramic view production method
CN109509148A (zh) * 2018-10-12 2019-03-22 广州小鹏汽车科技有限公司 Panoramic surround-view image stitching and fusion method and apparatus
CN112102307A (zh) * 2020-09-25 2020-12-18 杭州海康威视数字技术股份有限公司 Method and apparatus for determining heat data of a global region, and storage medium
CN114125296A (zh) * 2021-11-24 2022-03-01 广东维沃软件技术有限公司 Image processing method and apparatus, electronic device, and readable storage medium

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108024058B (zh) * 2017-11-30 2019-08-02 Oppo广东移动通信有限公司 Image blurring processing method and apparatus, mobile terminal, and storage medium
CN108038825B (zh) * 2017-12-12 2020-08-04 维沃移动通信有限公司 Image processing method and mobile terminal
CN110278366B (zh) 2018-03-14 2020-12-01 虹软科技股份有限公司 Panoramic image blurring method, terminal, and computer-readable storage medium
CN109008942A (zh) * 2018-09-15 2018-12-18 中山大学中山眼科中心 Whole-eye optical coherence tomography device based on a slit-lamp platform, and imaging method
CN109104576A (zh) * 2018-10-29 2018-12-28 努比亚技术有限公司 Panoramic shooting method, wearable device, and computer-readable storage medium
CN111385461B (zh) * 2018-12-28 2022-08-02 中兴通讯股份有限公司 Panoramic shooting method and apparatus, camera, and mobile terminal
CN110276774B (zh) * 2019-06-26 2021-07-23 Oppo广东移动通信有限公司 Object drawing method and apparatus, terminal, and computer-readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101593350A (zh) * 2008-05-30 2009-12-02 日电(中国)有限公司 Depth-adaptive video stitching method, apparatus, and system
CN101673395A (zh) * 2008-09-10 2010-03-17 深圳华为通信技术有限公司 Image stitching method and apparatus
CN101923709A (zh) * 2009-06-16 2010-12-22 日电(中国)有限公司 Image stitching method and device
JP2011259168A (ja) * 2010-06-08 2011-12-22 Fujifilm Corp Stereoscopic panoramic image capturing device
CN104519340A (zh) * 2014-12-30 2015-04-15 余俊池 Panoramic video stitching method based on multiple depth-image transformation matrices
CN106023073A (zh) * 2016-05-06 2016-10-12 安徽伟合电子科技有限公司 Image stitching system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574838B (zh) * 2014-10-15 2018-09-14 上海弘视通信技术有限公司 Image registration and stitching method for multi-camera systems, and apparatus therefor
CN104318517A (zh) * 2014-11-19 2015-01-28 北京奇虎科技有限公司 Image stitching processing method, apparatus, and client
CN105407280B (zh) * 2015-11-11 2019-02-12 Oppo广东移动通信有限公司 Panoramic image synthesis method and system


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108876722A (zh) * 2018-06-19 2018-11-23 国网浙江省电力有限公司温州供电公司 VR panoramic view production method
CN109509148A (zh) * 2018-10-12 2019-03-22 广州小鹏汽车科技有限公司 Panoramic surround-view image stitching and fusion method and apparatus
CN109509148B (zh) * 2018-10-12 2023-08-29 广州小鹏汽车科技有限公司 Panoramic surround-view image stitching and fusion method and apparatus
CN112102307A (zh) * 2020-09-25 2020-12-18 杭州海康威视数字技术股份有限公司 Method and apparatus for determining heat data of a global region, and storage medium
CN112102307B (zh) * 2020-09-25 2023-10-20 杭州海康威视数字技术股份有限公司 Method and apparatus for determining heat data of a global region, and storage medium
CN114125296A (zh) * 2021-11-24 2022-03-01 广东维沃软件技术有限公司 Image processing method and apparatus, electronic device, and readable storage medium

Also Published As

Publication number Publication date
CN106651755A (zh) 2017-05-10

Similar Documents

Publication Publication Date Title
WO2018090455A1 (zh) Terminal panoramic image processing method and apparatus, and terminal
WO2019050360A1 (en) ELECTRONIC DEVICE AND METHOD FOR AUTOMATICALLY SEGMENTING TO BE HUMAN IN AN IMAGE
WO2015016619A1 (en) Electronic apparatus, method of controlling the same, and image reproducing apparatus and method
WO2016032292A1 (en) Photographing method and electronic device
WO2020036311A1 (en) Method and device for generating content
WO2016006946A1 (ko) System for creating and playing augmented reality content, and method using the same
WO2015126044A1 (ko) Method for processing image and electronic apparatus therefor
WO2018093100A1 (en) Electronic apparatus and method for processing image thereof
WO2009151292A2 (ko) Image conversion method and apparatus
WO2020054949A1 (en) Electronic device and method for capturing view
WO2017026705A1 (ko) Electronic device for generating 360-degree 3D stereoscopic images, and method therefor
WO2020101420A1 (ko) Method and apparatus for measuring optical properties of an augmented reality device
WO2017090833A1 (en) Photographing device and method of controlling the same
WO2019156428A1 (en) Electronic device and method for correcting images using external electronic device
WO2020076128A1 (en) Method and electronic device for switching between first lens and second lens
WO2023277253A1 (en) Automatic representation switching based on depth camera field of view
WO2013077522A1 (en) Apparatus and method for hierarchical stereo matching
WO2016080653A1 (en) Method and apparatus for image processing
WO2015142137A1 (ko) Electronic device, image processing method, and computer-readable recording medium
WO2016072538A1 (ko) Method for operating a camera device through a user interface
WO2014178578A1 (en) Apparatus and method for generating image data in portable terminal
WO2014003282A1 (en) Image processing apparatus, image relaying apparatus, method for processing image, and method for relaying image
WO2021133139A1 (en) Electronic apparatus and control method thereof
WO2017034321A1 (ko) Technique for supporting photography in a device having a camera, and device therefor
WO2019112169A1 (ko) Electronic device and method for generating a 3D image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16922020

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16922020

Country of ref document: EP

Kind code of ref document: A1