WO2017016050A1 - Image preview method, apparatus, and terminal - Google Patents
Image preview method, apparatus, and terminal Download PDF Info
- Publication number
- WO2017016050A1 (PCT/CN2015/089066; CN2015089066W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- depth
- image
- preview
- background area
- area
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/2224—Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
- H04N5/2226—Determination of depth image, e.g. for foreground/background separation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2621—Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/207—Image signal generators using stereoscopic image cameras using a single 2D image sensor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/681—Motion detection
Definitions
- More and more terminals, such as tablets and smart phones, are being equipped with array cameras, usually rear dual cameras.
- The array camera obtains several images of the same scene from different angles; after shooting is completed, the multiple images are combined into one image and the depth of field is lightened, thereby achieving background blurring.
- The embodiment of the invention provides an image preview method, apparatus, and terminal, which provide a solution for previewing the background blur effect of the current scene in real time.
- the terminal outputs an image including the target area and the blurred background area as a real-time preview of the current scene
- the embodiment of the invention further provides an image preview device, the device comprising:
- the acquisition module is configured to synchronously acquire preview images of the current scene from different angles by using two camera modules in the dual camera shooting preview mode;
- a processing module configured to obtain the foreground depth of field Depth of the preset target area in the preview image; determine, in the same preview image, the area composed of all pixel units whose depth of field lies outside the range [Depth-d, Depth+d] as the background area; and blur the background area using a preset blurring coefficient, where d is a preset threshold;
- FIG. 1 is a schematic flowchart of a method for previewing an image according to an embodiment of the present invention
- FIG. 5 is a schematic structural diagram of an image preview apparatus according to an embodiment of the present invention.
- FIG. 6 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
- the embodiments of the present invention can be applied to various types of terminal devices, such as smart phones, tablet computers, notebook computers, and the like.
- The embodiments of the present invention are also applicable to terminal devices having an array camera or a camera based on a similar principle, for example: a device having an array camera, a terminal having a dual camera, a terminal having a fly-eye camera, and the like.
- The embodiment of the invention blurs the background (i.e., the background area) while previewing the scene, and the degree of background blur can be adjusted in real time. The user takes the photograph once satisfied with the blur effect, which greatly improves the success rate and creativity of photographing.
- Step 12: In the preview images, the terminal calculates the depth of field of each pixel unit contained in a preview image according to the phase difference produced by the distance and angle between the dual camera modules.
- Step 13: The terminal obtains the foreground depth of field Depth of the preset target area in the preview image and, in the same preview image, determines the area composed of all pixel units whose depth of field lies outside the range [Depth-d, Depth+d] as the background area; the background area is blurred using a preset blurring coefficient, where d is a preset threshold.
- Step 14: The terminal outputs an image containing the target area and the blurred background area as a real-time preview of the current scene.
- the pixel unit is specifically a pixel point or a pixel block composed of a plurality of pixel points.
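The target/background split of step 13 is straightforward to express over a depth map. A minimal sketch (function and variable names are our own, not from the patent), assuming a per-pixel depth map stored as a NumPy array:

```python
import numpy as np

def background_mask(depth_map, target_depth, d):
    """True where the depth of field lies outside [target_depth - d,
    target_depth + d]; those pixel units form the background area."""
    return (depth_map < target_depth - d) | (depth_map > target_depth + d)

# Toy 3x3 depth map (arbitrary units); foreground Depth = 5.0, threshold d = 1.0.
depth = np.array([[5.0, 5.5, 9.0],
                  [4.2, 5.0, 8.0],
                  [1.0, 5.8, 7.5]])
mask = background_mask(depth, target_depth=5.0, d=1.0)
# Pixels with depth 9.0, 8.0, 1.0, and 7.5 fall outside [4.0, 6.0]
# and are the ones to be blurred.
```

The mask can be computed per pixel block instead of per pixel by feeding in a block-level depth map of the same shape.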
- The terminal uses two camera modules to synchronously acquire original images of the current scene from two different angles, and performs image compression and differential processing on the original images according to a preset scaling ratio and differential coefficient, obtaining preview images of the same size.
- After receiving a shooting instruction, the terminal acquires an original image of the current scene; according to the background area determined in the preview image of the current scene, the terminal determines the corresponding background area in the acquired original image and blurs it using the preset blurring coefficient; the terminal outputs an image containing the target area in the original image and the blurred background area in the original image as the captured image.
- Step 13 may include: separately calculating the difference between the depth of field of each pixel unit contained in the background area and the foreground depth of field; determining, in a pre-generated correspondence of blurring coefficients, the blurring coefficient corresponding to the difference; and blurring the corresponding pixel unit using the determined blurring coefficient.
- The target area is set by the following steps: after receiving a focus instruction, the terminal determines the target area according to the size and coordinates of the target area indicated by the focus instruction; or the terminal determines the target area according to the size and coordinates of the target area set by default.
- the embodiment of the present invention is exemplified by taking a mobile phone having a dual camera (hereinafter may be simply referred to as a mobile phone) as an example.
- the background blur scheme provided by the embodiment of the present invention may include a preview process and a photographing process.
- The mobile phone has a dual camera device, in which one camera is the main camera and the other is the auxiliary camera; the preview image taken by the main camera is the main preview image, and the preview image captured by the auxiliary camera is the auxiliary preview image.
- Any preview image can be divided into a foreground area and a background area. The foreground area is the target area, i.e., the focused area, and may include a focus area; the background area refers to the areas other than the focused area.
- the parameter of the main camera may be higher than the parameter of the auxiliary camera.
- When the main camera has higher parameters, the main preview image may be preferentially selected for output; alternatively, two cameras with the same parameters may serve as the main camera and the auxiliary camera respectively, in which case either image may be selected for output.
- Because the main camera module and the auxiliary camera module are separated by a certain distance or angle, the main preview image and the auxiliary preview image have a certain phase difference. This phase difference can be used to obtain the depth of field of each pixel block, or even of each individual pixel, and the background is then blurred according to the depth of field of the background area and the preset blurring coefficient.
- A pixel block in the present invention may be a block composed of a preset number of pixel points, for example, a block of 32×32 pixels, 16×16 pixels, or 8×8 pixels.
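The text does not state the exact depth formula, but with a known baseline and focal length the usual stereo triangulation relation Z = f·B/disparity converts the per-pixel phase difference (disparity) between the two preview images into a depth. The sketch below assumes that relation; the calibration values are hypothetical:

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Pinhole stereo relation Z = f * B / disparity, where disparity is
    the per-pixel phase difference (in pixels) between the main and
    auxiliary preview images. Zero disparity maps to infinite depth."""
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Hypothetical calibration: 1000 px focal length, 2 cm camera baseline.
# A 10 px disparity then corresponds to a depth of 2 m.
z = depth_from_disparity([[10.0, 5.0]], focal_px=1000.0, baseline_m=0.02)
```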
- FIG. 2 is a schematic flowchart of performing background blurring in a preview process according to an embodiment of the present invention. As shown in FIG. 2, the process may include:
- Step 21: The terminal acquires the main preview image Ii at the current time.
- Original images of the current scene can be synchronously captured by the dual cameras of the terminal at a preset time interval, where the image captured by the main camera is the main original image and the image captured by the auxiliary camera is the auxiliary original image; the terminal then performs image compression and differential processing on the main original image and the auxiliary original image according to a preset scaling ratio and differential coefficient, obtaining and caching the main preview image Ii and the auxiliary preview image.
- The same scaling ratio and differential coefficient may be used to compress and differentially process the main original image.
- Step 22: The terminal determines whether the depth information needs to be updated between the main preview image Ii at the current time and the main preview image Ii-1 at the previous time; if yes, step 23 is performed; otherwise, step 24 is performed.
- Step 23: The terminal updates the depth information D[w, h] of the main preview image at the current time.
- The depth information D[w, h] can be used as the depth map of the current scene, where w can represent the length of the depth map, h can represent the width, and w×h is not larger than the pixel size of the entire image. Further, in the embodiment of the present invention, the size of the depth information D[w, h] may also be scaled according to the scaling ratio of the image.
- the terminal may calculate the depth of field map of the entire image in units of pixel blocks, or may calculate the depth of field map of the entire image in units of pixels.
- The terminal can calculate the depth map of the entire image in units of pixel blocks as follows: each pixel block may be composed of na×nb pixels, where na and nb are positive integers.
- The terminal can save the depth map of the main original image for blurring the original image during subsequent shooting, or it can also be used when post-processing the original image.
- the embodiment of the present invention can obtain the depth of field map by using the foregoing implementation manner.
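As a sketch of the block-wise variant just described (names are ours, and it assumes the image size divides evenly by the block size), the per-block depth can be taken as the mean of each na×nb block of per-pixel depths:

```python
import numpy as np

def block_depth_map(depth_px, na, nb):
    """Downsample a per-pixel depth map to a per-block depth map by
    averaging each na x nb block of pixels."""
    h, w = depth_px.shape
    blocks = depth_px.reshape(h // na, na, w // nb, nb)
    return blocks.mean(axis=(1, 3))

# 4x4 per-pixel depth map reduced to a 2x2 depth map of 2x2 blocks.
depth_px = np.arange(16, dtype=float).reshape(4, 4)
d_blocks = block_depth_map(depth_px, 2, 2)
```

Using blocks instead of single pixels trades depth-map resolution for speed, which matters for a real-time preview.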
- the terminal can calculate the depth of field map of the entire image by using pixel points as follows:
- A certain phase difference exists between the main preview image and the auxiliary preview image.
- The depth of field of each pixel point can be calculated separately, and the depths of the pixels in the main preview image are arranged in order as the depth map, which is saved in the attribute information of the main original image corresponding to the main preview image, or saved in a file attached to that main original image. In this way, the terminal can save the depth map of the main original image for blurring the original image during subsequent shooting, or for post-processing the original image.
- The depth map refers to the full collection of the depths of field of each region (i.e., pixel block) or each pixel in the entire image.
- Other methods capable of acquiring the depth of field of each region (i.e., pixel block) or each pixel in the entire image also fall within the scope of protection of the present application and are not repeated here.
- the depth of field map can be output in real time when the dual camera is working.
- The depth map may be updated every 6 to 12 preview frames.
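The "update the depth map only every few frames" behaviour can be sketched as a small cache; the class name and the interval of 8 frames (within the suggested 6 to 12) are our own choices:

```python
class DepthCache:
    """Reuse the last depth map between updates; recompute only every
    `interval` preview frames. Hypothetical helper, not from the patent."""

    def __init__(self, compute_fn, interval=8):
        self.compute_fn = compute_fn  # e.g. a stereo-matching routine
        self.interval = interval
        self.frame = 0
        self.depth = None

    def get(self, main_img, aux_img):
        # Recompute on the first frame and then every `interval` frames.
        if self.depth is None or self.frame % self.interval == 0:
            self.depth = self.compute_fn(main_img, aux_img)
        self.frame += 1
        return self.depth

# Count how often the (dummy) depth computation actually runs.
calls = []
cache = DepthCache(lambda a, b: calls.append(1) or len(calls), interval=8)
for _ in range(24):
    cache.get(None, None)
# Over 24 preview frames the depth map is recomputed only 3 times
# (frames 0, 8, and 16); all other frames reuse the cached map.
```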
- Step 24: The terminal acquires the foreground depth of field Depth corresponding to the current target area Ia.
- the manner in which the target area is set may be the same as the implementation in the prior art.
- The target area can be manually selected by the user; when there is no manual selection, the terminal uses the default target area as the current target area.
- Step 25: According to the foreground depth of field Depth and the depth information D[w, h], the terminal acquires the image region whose depth of field in the main preview image lies outside the range [Depth-d, Depth+d] and determines that region as the background region Ib.
- d is a preset threshold indicating the range of the depth of field of the focus area. Since a single specified Depth is one specific value, restricting sharpness to exactly that focal surface would be too harsh and the range of complete clarity too small; a margin is therefore added before and after it, so that an object spanning the focal plane appears entirely clear rather than only partially clear.
- The terminal may determine the area other than the target area Ia as the background area; that is, the image region whose depth of field lies outside the range [Depth-d, Depth+d] is determined as the background region Ib.
- The background region Ib may include regions having different depths of field; that is, the background region Ib may be composed of pixel blocks or pixels with different depths of field.
- Step 26: The terminal determines whether the blurring coefficient mi needs to be updated. If yes, step 27 is performed; otherwise, step 28 is performed.
- Step 27: The terminal updates the blurring coefficient mi.
- The degree of blurring is not the same for all regions outside [Depth-d, Depth+d]: the closer a pixel's depth is to the Depth of the target region (i.e., the target Depth), the lighter the blur; the farther from the target Depth, the heavier the blur.
- The embodiment of the present invention may also use the same blurring coefficient for all pixel units contained in the background area; details are not described here again.
- The blurring coefficient mi corresponding to a pixel unit may be calculated according to the following Formula 1: mi = m × f(Δd), where m is the preset value of the blurring coefficient; f(Δd) is a monotonically decreasing or monotonically increasing function; and Δd = |Depth_i − Depth|, with Depth_i the depth of field of the pixel unit and Depth the foreground depth of field.
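As a concrete, hypothetical instance of Formula 1: choosing the monotonically increasing function f(Δd) = 1 + k·Δd (the form of f and the constant k are our own picks, not specified by the patent) makes pixel units farther from the foreground depth receive a larger blurring coefficient:

```python
import numpy as np

def blur_coefficients(depth_map, target_depth, m=1.0, k=0.1):
    """Formula 1: m_i = m * f(delta_d), delta_d = |Depth_i - Depth|.
    Here f(delta_d) = 1 + k * delta_d, a monotonically increasing choice,
    so blur strength grows with distance from the foreground depth."""
    delta = np.abs(np.asarray(depth_map, dtype=float) - target_depth)
    return m * (1.0 + k * delta)

# Depths 5.0, 7.0, 10.0 against a foreground Depth of 5.0:
# delta_d = 0, 2, 5, giving coefficients 1.0, 1.2, 1.5.
m_i = blur_coefficients([[5.0, 7.0, 10.0]], target_depth=5.0)
```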
- The preset value of the blurring coefficient may be manually adjusted by the user, or set to a default value when the terminal is initialized.
- The terminal can display the adjustment effect in the next frame after the adjustment, achieving a real-time preview; an adjustment interface for the blurring coefficient can be provided on the display interface of the terminal.
- the adjustment button can be a virtual button, or a physical button or a combination button.
- Step 28: The terminal performs blurring processing on the background region Ib according to the blurring coefficient mi.
- The blurring of the background region Ib may use any of various blurring methods in the prior art, which are not enumerated here; mean filtering is used as the illustrative example.
- FIG. 4(A) is a schematic diagram of background blurring using the mean filtering f(m) method provided by the embodiment of the present invention. FIG. 4(A) shows the weights of five pixel points: a central pixel and its four adjacent pixels. In this mean filter f(m), the pixel value of any pixel is replaced by the weighted sum of itself and its four adjacent pixels. The weights of these five pixels sum to 1 and are distributed as follows: the weight of the central pixel is R, with R in the range [1/5, 1], and each of the four adjacent points has weight (1 − R)/4. The blurring can be implemented as a convolution.
- FIG. 4(B) shows background blurring using the mean filtering F(m) method provided by the embodiment of the present invention. FIG. 4(B) shows the weights of nine pixel points: one central pixel and its eight adjacent pixels; that is, the pixel value of any pixel is replaced by the weighted sum of itself and its eight adjacent pixels. The weights of these nine pixels sum to 1 and are distributed as follows: the weight of the central pixel is R, with R in the range [1/9, 1], and each of the eight adjacent points has weight (1 − R)/8. The blurring can be implemented as a convolution.
- a larger size filter can be used for convolution processing.
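The 5-point cross kernel of FIG. 4(A) and the 9-point kernel of FIG. 4(B) can both be applied with one small convolution helper. The implementation below is a sketch (edge padding and the specific R values are our own choices; the kernels are symmetric, so correlation and convolution coincide):

```python
import numpy as np

def mean_filter(img, kernel):
    """Blur an image by convolving it with a normalized averaging kernel,
    padding the borders with edge replication."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(kh):
        for dx in range(kw):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0],
                                           dx:dx + img.shape[1]]
    return out

R = 0.5  # centre weight in [1/5, 1]; the four neighbours share (1 - R)/4 each
cross5 = np.array([[0.0,         (1 - R) / 4, 0.0],
                   [(1 - R) / 4, R,           (1 - R) / 4],
                   [0.0,         (1 - R) / 4, 0.0]])

R9 = 0.2  # centre weight in [1/9, 1]; the eight neighbours share (1 - R9)/8 each
full9 = np.full((3, 3), (1 - R9) / 8)
full9[1, 1] = R9

# Blur a single bright pixel: its value 8.0 is redistributed so the centre
# keeps 8 * R = 4.0 and each 4-neighbour receives 8 * (1 - R)/4 = 1.0.
img = np.zeros((5, 5))
img[2, 2] = 8.0
blurred = mean_filter(img, cross5)
```

A heavier blur (a larger filter or a smaller R) can be applied to background pixels whose blurring coefficient mi is larger.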
- Step 29: The terminal displays the processed image I' on the screen.
- FIG. 3 is a schematic flowchart of performing background blurring during shooting according to an embodiment of the present invention. As shown in FIG. 3, the process may include:
- Steps 31 to 39 are the same as the preview process described in steps 21 to 29 above and are not repeated here. After step 39, the process proceeds to step 310.
- Step 310: The terminal determines whether a photographing operation needs to be performed. If yes, step 311 is performed; otherwise, the process jumps back to step 31.
- Step 311: The terminal blurs the background area of the captured photo according to the foreground depth of field Depth of the current target area and the blurring coefficient mi.
- The background area of the captured photo may be blurred in the same manner as in step 28 above; details are not repeated here.
- Step 312: The terminal saves the processed image and displays it on the screen.
- In summary, in the dual camera shooting preview mode, the terminal can synchronously obtain preview images of the current scene from different angles; according to the phase difference produced by the distance and angle between the dual camera modules, the terminal calculates the depth of field of each pixel unit contained in either of the two preview images, thereby determining the depth information of each pixel unit in the preview image of the current scene; the terminal obtains the foreground depth of field Depth of the preset target area in the preview image and, in the same preview image, determines the area composed of all pixel units whose depth of field lies outside the range [Depth-d, Depth+d] as the background area; the background area is blurred using the preset blurring coefficient; and the terminal outputs the image containing the target area and the blurred background area as a real-time preview of the current scene.
- The embodiment of the present invention thus provides a scheme for previewing the background blur effect during the preview process, which allows the user to assess the blur effect in advance before shooting, thereby improving the success rate of photographing.
- FIG. 5 is a schematic structural diagram of an image preview device according to an embodiment of the present invention. As shown in FIG. 5, the device includes:
- the obtaining module 51 is configured to synchronously acquire a preview image of the current scene from different angles by using two camera modules in the dual camera shooting preview mode;
- a calculating module 52, configured to calculate, in the preview images, the depth of field of each pixel unit contained in a preview image according to the phase difference produced by the distance and angle between the dual camera modules;
- the processing module 53 is configured to obtain the foreground depth of field Depth of the preset target area in the preview image; determine, in the same preview image, the area composed of all pixel units whose depth of field lies outside the range [Depth-d, Depth+d] as the background area; and blur the background area using a preset blurring coefficient, where d is a preset threshold;
- the output module 54 is configured to output an image including the target area and the blurred background area as a real-time preview of the current scene
- the pixel unit is specifically a pixel point or a pixel block composed of a plurality of pixel points.
- the obtaining module 51 is specifically configured to:
- the original image of the current scene is acquired synchronously from two different angles by using two camera modules; the original image is separately image-compressed and differentially processed according to a preset scaling ratio and a differential coefficient, respectively, to obtain preview images of the same size.
- a shooting module, configured to acquire an original image of the current scene after the real-time preview is output and a shooting instruction is received; determine, according to the background region determined in the preview image of the current scene, the corresponding background region in the acquired original image; and blur the background region in the original image using the preset blurring coefficient;
- the output module 54 is further configured to output an image including the target area in the original image and the blurred background area in the original image as the captured image.
- the processing module 53 is configured to separately calculate the difference between the depth of field of each pixel unit contained in the background area and the foreground depth of field; determine, in a pre-generated correspondence of blurring coefficients, the blurring coefficient corresponding to the difference; and blur the corresponding pixel unit using the determined blurring coefficient.
- the focusing module is configured to determine the target area according to the size and coordinates of the target area indicated by the focus instruction after receiving the focus command, or determine the target area according to the size and coordinates of the target area set by default.
- the processor 62 is configured to calculate, in the preview images, the depth of field of each pixel unit contained in a preview image according to the phase difference produced by the distance and angle between the dual camera modules; obtain the foreground depth of field Depth of the preset target area in the preview image; determine, in the same preview image, the area composed of all pixel units whose depth of field lies outside the range [Depth-d, Depth+d] as the background area; blur the background area using the preset blurring coefficient, where d is a preset threshold; and output the image containing the target area and the blurred background area as a real-time preview of the current scene; wherein a pixel unit is specifically a pixel point or a pixel block composed of several pixel points;
- The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams can be implemented by computer program instructions. These computer program instructions can be provided to a general purpose computer, a special purpose computer, an embedded processor, or a processor of another programmable data processing device, such that the instructions executed by the processor of the computer or other programmable data processing device implement the functions specified in one or more flows of the flowchart.
Abstract
Description
Claims (13)
- An image preview method, characterized in that the method comprises: in a dual-camera shooting preview mode, a terminal synchronously acquires preview images of the current scene from different angles by using two camera modules; in the preview images, the terminal calculates, according to the phase difference produced by the distance and angle between the dual camera modules, the depth of field of each pixel unit contained in a preview image; the terminal obtains the foreground depth of field Depth of a preset target area in the preview image and, in the same preview image, determines the area composed of all pixel units whose depth of field lies outside the range [Depth-d, Depth+d] as the background area; the background area is blurred using a preset blurring coefficient, where d is a preset threshold; the terminal outputs the image containing the target area and the blurred background area as a real-time preview of the current scene; wherein a pixel unit is specifically a pixel point or a pixel block composed of several pixel points.
- The method according to claim 1, wherein the terminal synchronously acquiring preview images of the current scene from different angles by using two camera modules specifically comprises: the terminal synchronously acquires original images of the current scene from two different angles by using the two camera modules; according to a preset scaling ratio and differential coefficient, the original images are respectively subjected to image compression and differential processing, obtaining preview images of the same size.
- The method according to claim 2, wherein, after the real-time preview is output, the method further comprises: after receiving a shooting instruction, the terminal acquires an original image of the current scene; according to the background area determined in the preview image of the current scene, the terminal determines the corresponding background area in the acquired original image and blurs the background area in the original image using the preset blurring coefficient; the terminal outputs an image containing the target area in the original image and the blurred background area in the original image as the captured image.
- The method according to claim 1, 2 or 3, wherein blurring the background area specifically comprises: separately calculating the difference between the depth of field of each pixel unit contained in the background area and the foreground depth of field; determining, in a pre-generated correspondence of blurring coefficients, the blurring coefficient corresponding to the difference; and blurring the corresponding pixel unit using the determined blurring coefficient.
- The method according to claim 4, wherein determining the blurring coefficient corresponding to the difference is specifically: for each pixel unit contained in the background area, calculating the blurring coefficient mi corresponding to that pixel unit according to Formula 1: mi = m × f(Δd), where m is the preset value of the blurring coefficient; f(Δd) is a monotonically decreasing or monotonically increasing function; and Δd = |Depth_i − Depth|, Depth_i being the depth of field of the pixel unit and Depth the foreground depth of field.
- The method according to claim 1, 2, 3 or 5, wherein the target area is set by the following steps: after receiving a focus instruction, the terminal determines the target area according to the size and coordinates of the target area indicated by the focus instruction; or the terminal determines the target area according to the size and coordinates of a target area set by default.
- An image preview apparatus, characterized in that the apparatus comprises: an acquisition module, configured to synchronously acquire preview images of the current scene from different angles by using two camera modules in a dual-camera shooting preview mode; a calculation module, configured to calculate, in the preview images, the depth of field of each pixel unit contained in a preview image according to the phase difference produced by the distance and angle between the dual camera modules; a processing module, configured to obtain the foreground depth of field Depth of a preset target area in the preview image, determine, in the same preview image, the area composed of all pixel units whose depth of field lies outside the range [Depth-d, Depth+d] as the background area, and blur the background area using a preset blurring coefficient, where d is a preset threshold; and an output module, configured to output the image containing the target area and the blurred background area as a real-time preview of the current scene; wherein a pixel unit is specifically a pixel point or a pixel block composed of several pixel points.
- The apparatus according to claim 7, wherein the acquisition module is specifically configured to: synchronously acquire original images of the current scene from two different angles by using the two camera modules; and, according to a preset scaling ratio and differential coefficient, respectively perform image compression and differential processing on the original images, obtaining preview images of the same size.
- The apparatus according to claim 8, further comprising: a shooting module, configured to acquire an original image of the current scene after the real-time preview is output and a shooting instruction is received, determine, according to the background area determined in the preview image of the current scene, the corresponding background area in the acquired original image, and blur the background area in the original image using the preset blurring coefficient; the output module is further configured to output an image containing the target area in the original image and the blurred background area in the original image as the captured image.
- The apparatus according to claim 7, 8 or 9, wherein the processing module is specifically configured to: separately calculate the difference between the depth of field of each pixel unit contained in the background area and the foreground depth of field; determine, in a pre-generated correspondence of blurring coefficients, the blurring coefficient corresponding to the difference; and blur the corresponding pixel unit using the determined blurring coefficient.
- The apparatus according to claim 10, wherein the processing module is specifically configured to: for each pixel unit contained in the background area, calculate the blurring coefficient mi corresponding to that pixel unit according to Formula 1: mi = m × f(Δd), where m is the preset value of the blurring coefficient; f(Δd) is a monotonically decreasing or monotonically increasing function; and Δd = |Depth_i − Depth|, Depth_i being the depth of field of the pixel unit and Depth the foreground depth of field.
- The apparatus according to claim 7, 8, 9 or 11, further comprising: a focusing module, configured to, after receiving a focus instruction, determine the target area according to the size and coordinates of the target area indicated by the focus instruction, or determine the target area according to the size and coordinates of a target area set by default.
- A terminal, characterized in that the terminal comprises: a dual camera module, configured to synchronously acquire preview images of the current scene from different angles; a processor, configured to calculate, in the preview images, the depth of field of each pixel unit contained in a preview image according to the phase difference produced by the distance and angle between the dual camera modules; obtain the foreground depth of field Depth of a preset target area in the preview image; determine, in the same preview image, the area composed of all pixel units whose depth of field lies outside the range [Depth-d, Depth+d] as the background area; blur the background area using a preset blurring coefficient, where d is a preset threshold; and output the image containing the target area and the blurred background area as a real-time preview of the current scene; wherein a pixel unit is specifically a pixel point or a pixel block composed of several pixel points; and a display screen, configured to display the real-time preview output by the processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/941,773 US10334153B2 (en) | 2015-07-24 | 2018-03-30 | Image preview method, apparatus and terminal |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510443350.X | 2015-07-24 | ||
CN201510443350.XA CN105100615B (zh) | 2015-07-24 | 2015-07-24 | Image preview method, apparatus, and terminal |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/941,773 Continuation US10334153B2 (en) | 2015-07-24 | 2018-03-30 | Image preview method, apparatus and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017016050A1 true WO2017016050A1 (zh) | 2017-02-02 |
Family
ID=54580056
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/089066 WO2017016050A1 (zh) | 2015-07-24 | 2015-09-07 | 一种图像的预览方法、装置及终端 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10334153B2 (zh) |
CN (1) | CN105100615B (zh) |
WO (1) | WO2017016050A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3493520A1 (en) * | 2017-11-30 | 2019-06-05 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method for dual-camera-based imaging, mobile terminal and storage medium |
WO2019134505A1 (zh) * | 2018-01-05 | 2019-07-11 | Oppo广东移动通信有限公司 | 图像虚化方法、存储介质及电子设备 |
CN110728632A (zh) * | 2019-09-04 | 2020-01-24 | 北京奇艺世纪科技有限公司 | 图像模糊处理方法、装置、计算机设备和存储介质 |
Families Citing this family (71)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105516586A (zh) * | 2015-12-01 | 2016-04-20 | 小米科技有限责任公司 | Picture shooting method, apparatus and system |
CN105578070A (zh) * | 2015-12-21 | 2016-05-11 | 深圳市金立通信设备有限公司 | An image processing method and terminal |
CN105611154A (zh) * | 2015-12-21 | 2016-05-25 | 深圳市金立通信设备有限公司 | An image processing method and terminal |
CN106921825A (zh) * | 2015-12-24 | 2017-07-04 | 西安中兴新软件有限责任公司 | A focus imaging method, apparatus and terminal |
CN105681627B (zh) * | 2016-03-03 | 2019-12-24 | 联想(北京)有限公司 | Image shooting method and electronic device |
CN105847674B (zh) * | 2016-03-25 | 2019-06-07 | 维沃移动通信有限公司 | A mobile-terminal-based preview image processing method and mobile terminal |
CN105898145A (zh) * | 2016-05-03 | 2016-08-24 | 深圳市金立通信设备有限公司 | A photographing method and terminal |
CN107613199B (zh) * | 2016-06-02 | 2020-03-13 | Oppo广东移动通信有限公司 | Blurred photo generation method, apparatus and mobile terminal |
CN105979165B (zh) * | 2016-06-02 | 2019-02-05 | Oppo广东移动通信有限公司 | Blurred photo generation method, apparatus and mobile terminal |
CN105933532A (zh) * | 2016-06-06 | 2016-09-07 | 广东欧珀移动通信有限公司 | Image processing method, apparatus and mobile terminal |
CN105933613A (zh) * | 2016-06-28 | 2016-09-07 | 广东欧珀移动通信有限公司 | An image processing method, apparatus and mobile terminal |
CN106251388A (zh) * | 2016-08-01 | 2016-12-21 | 乐视控股(北京)有限公司 | Photo processing method and apparatus |
CN106412421A (zh) * | 2016-08-30 | 2017-02-15 | 成都丘钛微电子科技有限公司 | A system and method for quickly generating large-size refocused images |
CN106504280A (zh) * | 2016-10-17 | 2017-03-15 | 努比亚技术有限公司 | A video browsing method and terminal |
CN106357980A (zh) * | 2016-10-19 | 2017-01-25 | 广东欧珀移动通信有限公司 | Image blurring method, apparatus and mobile terminal |
CN106454118A (zh) * | 2016-11-18 | 2017-02-22 | 上海传英信息技术有限公司 | Photo blurring method and mobile terminal |
CN106657782B (zh) * | 2016-12-21 | 2020-02-18 | 努比亚技术有限公司 | A picture processing method and terminal |
CN106803920B (zh) * | 2017-03-17 | 2020-07-10 | 广州视源电子科技股份有限公司 | An image processing method, apparatus and intelligent conference terminal |
CN106981044B (zh) * | 2017-03-20 | 2020-06-23 | 成都通甲优博科技有限责任公司 | An image blurring method and system |
CN106960413A (zh) * | 2017-03-24 | 2017-07-18 | 深圳市金立通信设备有限公司 | An image blurring method and terminal |
CN108629779A (zh) * | 2017-03-24 | 2018-10-09 | 上海传英信息技术有限公司 | Intelligent image-matting method and mobile terminal |
CN107197138A (zh) * | 2017-03-31 | 2017-09-22 | 努比亚技术有限公司 | A shooting apparatus, method and mobile terminal |
CN107018331A (zh) * | 2017-04-19 | 2017-08-04 | 努比亚技术有限公司 | A dual-camera-based imaging method and mobile terminal |
CN107240072B (zh) * | 2017-04-27 | 2020-06-05 | 南京秦淮紫云创益企业服务有限公司 | A screen brightness adjustment method, terminal and computer-readable storage medium |
CN107172346B (zh) * | 2017-04-28 | 2020-02-07 | 维沃移动通信有限公司 | A blurring method and mobile terminal |
TWI636316B (zh) * | 2017-05-05 | 2018-09-21 | 致伸科技股份有限公司 | Communication device and optical device thereof |
CN108873594A (zh) * | 2017-05-10 | 2018-11-23 | 致伸科技股份有限公司 | Communication device and optical device thereof |
CN107392850B (zh) * | 2017-06-30 | 2020-08-25 | 联想(北京)有限公司 | Image processing method and system |
CN107343144A (zh) * | 2017-07-10 | 2017-11-10 | 广东欧珀移动通信有限公司 | Dual-camera switching method, apparatus and device |
CN107404617A (zh) * | 2017-07-21 | 2017-11-28 | 努比亚技术有限公司 | A shooting method, terminal and computer storage medium |
CN107438161A (zh) * | 2017-07-31 | 2017-12-05 | 广东欧珀移动通信有限公司 | Shot-picture processing method, apparatus and terminal |
US20190052791A1 (en) * | 2017-08-10 | 2019-02-14 | Olympus Corporation | Image processing apparatus and image processing method |
KR102344104B1 (ko) * | 2017-08-22 | 2021-12-28 | 삼성전자주식회사 | Electronic device capable of controlling the display effect of an image, and image display method |
CN107481186B (zh) * | 2017-08-24 | 2020-12-01 | Oppo广东移动通信有限公司 | Image processing method, apparatus, computer-readable storage medium and computer device |
CN107370958B (zh) * | 2017-08-29 | 2019-03-29 | Oppo广东移动通信有限公司 | Image blurring method, apparatus and shooting terminal |
CN109474780B (zh) | 2017-09-07 | 2023-07-25 | 虹软科技股份有限公司 | A method and apparatus for image processing |
CN107635093A (zh) * | 2017-09-18 | 2018-01-26 | 维沃移动通信有限公司 | An image processing method, mobile terminal and computer-readable storage medium |
CN107613210B (zh) * | 2017-09-30 | 2020-12-29 | 北京小米移动软件有限公司 | Image display method and apparatus, terminal, and storage medium |
CN108024054B (zh) * | 2017-11-01 | 2021-07-13 | Oppo广东移动通信有限公司 | Image processing method, apparatus, device and storage medium |
CN108055452B (zh) * | 2017-11-01 | 2020-09-18 | Oppo广东移动通信有限公司 | Image processing method, apparatus and device |
CN107948500A (zh) | 2017-11-01 | 2018-04-20 | 广东欧珀移动通信有限公司 | Image processing method and apparatus |
CN107682639B (zh) * | 2017-11-16 | 2019-09-27 | 维沃移动通信有限公司 | An image processing method, apparatus and mobile terminal |
CN107948520A (zh) * | 2017-11-30 | 2018-04-20 | 广东欧珀移动通信有限公司 | Image processing method and apparatus |
CN107948519B (zh) | 2017-11-30 | 2020-03-27 | Oppo广东移动通信有限公司 | Image processing method, apparatus and device |
CN108024056B (zh) | 2017-11-30 | 2019-10-29 | Oppo广东移动通信有限公司 | Dual-camera-based imaging method and apparatus |
CN107959778B (zh) | 2017-11-30 | 2019-08-20 | Oppo广东移动通信有限公司 | Dual-camera-based imaging method and apparatus |
CN108154514B (zh) * | 2017-12-06 | 2021-08-13 | Oppo广东移动通信有限公司 | Image processing method, apparatus and device |
CN108111749B (zh) * | 2017-12-06 | 2020-02-14 | Oppo广东移动通信有限公司 | Image processing method and apparatus |
CN108108008A (zh) * | 2017-12-29 | 2018-06-01 | 深圳市金立通信设备有限公司 | Camera blurring control method, terminal and computer-readable storage medium |
CN108347558A (zh) * | 2017-12-29 | 2018-07-31 | 维沃移动通信有限公司 | An image optimization method, apparatus and mobile terminal |
CN108305223B (zh) * | 2018-01-09 | 2020-11-03 | 珠海格力电器股份有限公司 | Image background blurring method and apparatus |
CN110022430A (zh) * | 2018-01-10 | 2019-07-16 | 中兴通讯股份有限公司 | Image blurring method, apparatus, mobile terminal and computer-readable storage medium |
CN110278366B (zh) | 2018-03-14 | 2020-12-01 | 虹软科技股份有限公司 | A panoramic image blurring method, terminal and computer-readable storage medium |
CN108900790B (zh) * | 2018-06-26 | 2021-01-01 | 努比亚技术有限公司 | Video image processing method, mobile terminal and computer-readable storage medium |
US11057553B2 (en) * | 2018-07-06 | 2021-07-06 | Samsung Electronics Co., Ltd. | Electronic device for capturing media using a bendable display and method thereof |
CN109688321B (zh) * | 2018-11-21 | 2021-09-07 | 惠州Tcl移动通信有限公司 | Electronic device, image display method thereof, and apparatus with storage function |
CN113056906A (zh) * | 2018-11-26 | 2021-06-29 | Oppo广东移动通信有限公司 | System and method for capturing telephoto-like images |
CN109615648B (zh) * | 2018-12-07 | 2023-07-14 | 深圳前海微众银行股份有限公司 | Depth-of-field data conversion method, apparatus, device and computer-readable storage medium |
CN109862262A (zh) * | 2019-01-02 | 2019-06-07 | 上海闻泰电子科技有限公司 | Image blurring method, apparatus, terminal and storage medium |
CN110062157B (zh) * | 2019-04-04 | 2021-09-17 | 北京字节跳动网络技术有限公司 | Image rendering method, apparatus, electronic device and computer-readable storage medium |
CN110661971A (zh) * | 2019-09-03 | 2020-01-07 | RealMe重庆移动通信有限公司 | Image shooting method, apparatus, storage medium and electronic device |
CN112866549B (zh) * | 2019-11-12 | 2022-04-12 | Oppo广东移动通信有限公司 | Image processing method and apparatus, electronic device, and computer-readable storage medium |
CN112950692B (zh) * | 2019-11-26 | 2023-07-14 | 福建天晴数码有限公司 | An image depth-of-field processing method and system based on a mobile game platform |
CN110992284A (zh) * | 2019-11-29 | 2020-04-10 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, electronic device and computer-readable storage medium |
CN113395434B (zh) * | 2020-03-11 | 2022-08-23 | 武汉Tcl集团工业研究院有限公司 | A preview image blurring method, storage medium and terminal device |
CN113938578A (zh) * | 2020-07-13 | 2022-01-14 | 武汉Tcl集团工业研究院有限公司 | An image blurring method, storage medium and terminal device |
CN114143442B (zh) * | 2020-09-03 | 2023-08-01 | 武汉Tcl集团工业研究院有限公司 | Image blurring method, computer device, and computer-readable storage medium |
CN113242353B (zh) * | 2021-03-22 | 2023-11-03 | 启美科技(江苏)有限公司 | Front pole-body proximity analysis platform |
CN113965695A (zh) * | 2021-09-07 | 2022-01-21 | 福建库克智能科技有限公司 | An image display method, system, apparatus, display unit and medium |
CN115134532A (zh) * | 2022-07-26 | 2022-09-30 | Oppo广东移动通信有限公司 | Image processing method, apparatus, storage medium and electronic device |
CN116112813B (zh) * | 2022-08-31 | 2024-03-19 | 荣耀终端有限公司 | Blurring method and apparatus |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101548232A (zh) * | 2006-06-29 | 2009-09-30 | 森纳拉科技有限责任公司 | Method and system for providing background blurring when capturing an image using an image capture device |
CN103095978A (zh) * | 2011-11-03 | 2013-05-08 | 华晶科技股份有限公司 | Image processing method for generating background blur and image capture device thereof |
CN103152521A (zh) * | 2013-01-30 | 2013-06-12 | 广东欧珀移动通信有限公司 | A method for achieving a depth-of-field effect in a mobile terminal, and mobile terminal |
CN103871051A (zh) * | 2014-02-19 | 2014-06-18 | 小米科技有限责任公司 | Image processing method, apparatus and electronic device |
CN103945118A (zh) * | 2014-03-14 | 2014-07-23 | 华为技术有限公司 | Image blurring method, apparatus and electronic device |
CN104333700A (zh) * | 2014-11-28 | 2015-02-04 | 广东欧珀移动通信有限公司 | An image blurring method and image blurring apparatus |
CN104463775A (zh) * | 2014-10-31 | 2015-03-25 | 小米科技有限责任公司 | A method and apparatus for achieving a picture depth-of-field effect |
Family Cites Families (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5638342A (en) | 1995-07-07 | 1997-06-10 | Fossil, Inc. | Watch saddle |
DE10139749A1 (de) | 2001-08-13 | 2003-02-27 | Achim Gallinger | Element |
CA2686239A1 (en) | 2007-05-07 | 2008-11-13 | Cybiocare Inc. | Non-invasive pressured probing device |
US7817187B2 (en) * | 2007-06-27 | 2010-10-19 | Aptina Imaging Corporation | Image blur correction using a secondary camera |
WO2010111788A1 (en) | 2009-03-31 | 2010-10-07 | Cybiocare Inc. | Device for securing a physiological sensor to the body of a user |
CN201431031Y (zh) | 2009-04-28 | 2010-03-31 | 江忠波 | Breathable watch strap |
JP2012027263A (ja) * | 2010-07-23 | 2012-02-09 | Sony Corp | Imaging apparatus, control method therefor, and program |
CN102566403A (zh) | 2010-12-20 | 2012-07-11 | 上海市杨浦区齐齐哈尔路第一小学 | Summer cooling watch |
AT12899U1 (de) | 2011-09-08 | 2013-01-15 | Hirsch Armbaender | Breathable wristband |
US8937646B1 (en) * | 2011-10-05 | 2015-01-20 | Amazon Technologies, Inc. | Stereo imaging using disparate imaging devices |
KR101792641B1 (ko) * | 2011-10-07 | 2017-11-02 | 엘지전자 주식회사 | Mobile terminal and method for generating an out-focused image thereof |
US9142010B2 (en) * | 2012-01-04 | 2015-09-22 | Audience, Inc. | Image enhancement based on combining images from multiple cameras |
CN202589516U (zh) | 2012-04-06 | 2012-12-12 | 南方医科大学 | A wearable electrocardiographic sensing electrode |
CN104350735B (zh) * | 2012-05-28 | 2016-04-13 | 富士胶片株式会社 | Image processing apparatus, imaging apparatus, and image processing method |
CN102722080B (zh) * | 2012-06-27 | 2015-11-18 | 杭州南湾科技有限公司 | A multi-purpose stereoscopic imaging method based on multi-lens shooting |
US9654761B1 (en) * | 2013-03-15 | 2017-05-16 | Google Inc. | Computer vision algorithm for capturing and refocusing imagery |
TWI631930B (zh) | 2013-04-01 | 2018-08-11 | 美盛醫電股份有限公司 | Physiological signal sensing device |
CN104424640B (zh) * | 2013-09-06 | 2017-06-20 | 格科微电子(上海)有限公司 | Method and apparatus for blurring an image |
CN203465521U (zh) | 2013-11-19 | 2014-03-05 | 汪小群 | An elastic watch |
EP2889715B1 (en) | 2013-12-27 | 2017-07-26 | Panasonic Intellectual Property Management Co., Ltd. | Wearable electronic device |
US20150265214A1 (en) | 2014-03-24 | 2015-09-24 | Samsung Electronics Co., Ltd. | Adjustable sensor support structure for optimizing skin contact |
CN103973977B (zh) * | 2014-04-15 | 2018-04-27 | 联想(北京)有限公司 | A preview-interface blurring method, apparatus and electronic device |
KR102220443B1 (ko) * | 2014-06-09 | 2021-02-25 | 삼성전자주식회사 | Electronic device and method utilizing depth information |
KR102157675B1 (ko) * | 2014-07-25 | 2020-09-18 | 삼성전자주식회사 | Photographing apparatus and photographing method thereof |
CN204192610U (zh) | 2014-11-04 | 2015-03-11 | 北京海思敏医疗技术有限公司 | Electrocardiographic monitoring device |
CN104490384B (zh) | 2014-12-19 | 2017-02-22 | 广州视源电子科技股份有限公司 | Heart rate detection device |
CN104510456B (zh) | 2014-12-26 | 2017-04-12 | 深圳市倍轻松科技股份有限公司 | Wrist-worn vascular stiffness detector |
2015
- 2015-07-24: CN application CN201510443350.XA, granted as CN105100615B (zh), status Active
- 2015-09-07: WO application PCT/CN2015/089066, published as WO2017016050A1 (zh), status Application Filing
2018
- 2018-03-30: US application US15/941,773, granted as US10334153B2 (en), status Active
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3493520A1 (en) * | 2017-11-30 | 2019-06-05 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method for dual-camera-based imaging, mobile terminal and storage medium |
US10554898B2 (en) | 2017-11-30 | 2020-02-04 | Guangdong Oppo Mobile Telecommunications Corp. Ltd. | Method for dual-camera-based imaging, and mobile terminal |
WO2019134505A1 (zh) * | 2018-01-05 | 2019-07-11 | Oppo广东移动通信有限公司 | 图像虚化方法、存储介质及电子设备 |
CN110728632A (zh) * | 2019-09-04 | 2020-01-24 | 北京奇艺世纪科技有限公司 | 图像模糊处理方法、装置、计算机设备和存储介质 |
Also Published As
Publication number | Publication date |
---|---|
US10334153B2 (en) | 2019-06-25 |
CN105100615A (zh) | 2015-11-25 |
US20180227478A1 (en) | 2018-08-09 |
CN105100615B (zh) | 2019-02-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017016050A1 (zh) | Image preview method, apparatus and terminal | |
CN108898567B (zh) | Image noise reduction method, apparatus and system | |
CN111641778B (zh) | A shooting method, apparatus and device | |
CN109474780B (zh) | A method and apparatus for image processing | |
CN108600576B (zh) | Image processing apparatus, method and system, and computer-readable recording medium | |
TWI441514B (zh) | Image processing method for correcting fisheye images and reducing perspective distortion, and related image processing apparatus | |
CN110473159B (zh) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
JP5036599B2 (ja) | Imaging apparatus | |
TWI602152B (zh) | Image capture device and image processing method thereof | |
US10827107B2 (en) | Photographing method for terminal and terminal | |
EP2881913A1 (en) | Image splicing method and apparatus | |
CN105282421B (zh) | A dehazed-image acquisition method, apparatus and terminal | |
CN110324532B (zh) | An image blurring method, apparatus, storage medium and electronic device | |
WO2019105261A1 (zh) | Background blurring method, apparatus and device | |
CN107749944A (zh) | A shooting method and apparatus | |
WO2015081555A1 (zh) | Photographing method for a dual-lens device, and dual-lens device | |
CN111062881A (zh) | Image processing method and apparatus, storage medium, and electronic device | |
CN112261387B (zh) | Image fusion method and apparatus for a multi-camera module, storage medium, and mobile terminal | |
CN114390262A (zh) | Method and electronic device for stitching three-dimensional spherical panoramic images | |
CN101472064A (zh) | Shooting system and depth-of-field processing method thereof | |
WO2018196854A1 (zh) | A photographing method, photographing apparatus and mobile terminal | |
CN113395434B (zh) | A preview image blurring method, storage medium and terminal device | |
TWI468772B (zh) | Image capturing device and method | |
CN113905170A (zh) | Zoom control method, apparatus, storage medium and electronic apparatus | |
CN113938578A (zh) | An image blurring method, storage medium and terminal device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15899402 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15899402 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05.07.2019) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15899402 Country of ref document: EP Kind code of ref document: A1 |