WO2018210318A1 - Image blurring processing method and apparatus, storage medium, and electronic device - Google Patents
Image blurring processing method and apparatus, storage medium, and electronic device
- Publication number
- WO2018210318A1 (PCT/CN2018/087372)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- depth
- pixel
- data
- value
- blurring
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/57—Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/815—Camera processing pipelines; Components thereof for controlling the resolution by using a single image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Definitions
- the embodiments of the present invention relate to image processing technologies, and in particular, to an image blurring processing method, apparatus, storage medium, and electronic device.
- the background blur of the image enables the subject to be clearly displayed and is very popular among photographers.
- the image blurring effect is mainly achieved by using the optical imaging principle and a large lens aperture in hardware. Therefore, the image blurring function is mainly integrated into professional cameras such as SLR cameras.
- the embodiment of the present application provides an image blurring processing technical solution.
- an image blurring processing method, including: acquiring a main image and an auxiliary image obtained by capturing a same object with a dual camera; acquiring depth data and depth confidence data according to the main image and the auxiliary image, the depth data indicating a depth value of each corresponding pixel point in the main image and the auxiliary image, and the depth confidence data indicating a confidence level of each depth value in the depth data; correcting at least one depth value in the depth data according to the depth confidence data; and performing blurring processing on the main image according to the corrected depth data.
- the obtaining of the depth confidence data according to the main image and the auxiliary image includes: if corresponding pixel points in the main image and the auxiliary image have the same depth value, assigning the corresponding pixel point a depth confidence value larger than a reference value; and/or, if the depth value of a pixel point in the main image exceeds a preset range, assigning that pixel point a depth confidence value smaller than the reference value; and/or, if a pixel point in the main image has two or more depth values, assigning that pixel point a depth confidence value smaller than the reference value.
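For illustration, the three confidence rules above can be sketched as a per-pixel scoring pass. The reference value, the valid depth range, and the bonus/penalty amounts below are assumptions for the sketch, not values specified in the application:

```python
def depth_confidence(d_main, d_aux, d_min=0.3, d_max=10.0,
                     reference=0.5, bonus=0.4, penalty=0.4):
    """Score one pixel's depth value (all parameter values are hypothetical).

    d_main -- candidate depth value(s) of the pixel in the main image
              (a list: matching may yield more than one candidate)
    d_aux  -- depth value of the corresponding pixel in the auxiliary image
    """
    conf = reference
    # Rule 1: a single depth value consistent across main/auxiliary
    # images earns a confidence above the reference value.
    if len(d_main) == 1 and abs(d_main[0] - d_aux) < 1e-6:
        conf = reference + bonus
    # Rule 2: a depth value outside the preset range drops below the reference.
    if any(d < d_min or d > d_max for d in d_main):
        conf = reference - penalty
    # Rule 3: two or more candidate depth values also drop below the reference.
    if len(d_main) >= 2:
        conf = min(conf, reference - penalty)
    return conf
```

A consistent in-range depth scores high; an ambiguous or out-of-range one scores low, marking it for the correction step.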
- the correcting of the at least one depth value in the depth data according to the depth confidence data comprises: replacing the depth value of the pixel point with the lowest depth confidence value with the depth value of a neighboring pixel point with the highest depth confidence value.
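A minimal one-dimensional sketch of this replacement rule (the application operates on a 2-D depth map; the 1-D neighborhood here is a simplification):

```python
def correct_depth(depth, conf):
    """Replace the depth at the lowest-confidence pixel with the depth of
    its highest-confidence neighbour (1-D sketch of the correction rule)."""
    depth = list(depth)
    i = min(range(len(depth)), key=lambda k: conf[k])   # least reliable pixel
    neighbours = [k for k in (i - 1, i + 1) if 0 <= k < len(depth)]
    j = max(neighbours, key=lambda k: conf[k])          # most reliable neighbour
    depth[i] = depth[j]
    return depth
```

For example, a spurious depth of 9.0 flanked by confident values near 1.0 is overwritten by its more reliable neighbour.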
- before the blurring processing is performed on the main image according to the corrected depth data, the method further includes: performing denoising processing on the depth data.
- the denoising processing includes: filtering the depth data by using a filter; and/or increasing each depth value in the depth data by a preset ratio.
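A sketch of both denoising operations, assuming a median filter as the filter and a 5% enlargement as the preset ratio (the application names neither the filter type nor the ratio):

```python
def denoise_depth(depth, scale=1.05, window=3):
    """Median-filter the depth values, then enlarge each by a preset ratio.
    The window size and the 5% ratio are illustrative assumptions."""
    half = window // 2
    filtered = []
    for i in range(len(depth)):
        lo, hi = max(0, i - half), min(len(depth), i + half + 1)
        # Lower median of the neighbourhood suppresses isolated spikes.
        filtered.append(sorted(depth[lo:hi])[(hi - lo - 1) // 2])
    return [d * scale for d in filtered]
```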
- the obtaining of the depth data according to the main image and the auxiliary image includes: performing stereo matching on the main image and the auxiliary image to obtain initial depth data; and performing depth calibration on the initial depth data so that corresponding pixel points of the main image and the auxiliary image are located at the same depth, thereby obtaining the depth data.
- the performing of the blurring processing on the main image according to the corrected depth data includes: acquiring, according to the corrected depth data, blurring desired data of each first pixel point in the main image; and performing blurring processing on the main image according to the blurring desired data of each first pixel point.
- the blurring of the main image according to the blurring desired data of each first pixel point includes: generating a blur map whose pixels correspond to the first pixel points of the main image and have an initial pixel value; determining an initial blurring weight value of each corresponding second pixel point in the blur map according to the blurring desired data of each first pixel point in the main image; performing at least one update on at least one second pixel point in the blur map, the update comprising: updating the current pixel value and current blurring weight value of at least one adjacent second pixel point of the corresponding second pixel point according to the pixel value of the first pixel point and the current blurring weight value of the second pixel point corresponding to that first pixel point; and obtaining a blurring processing result of the main image according to the updated blur map.
- a distance between the adjacent second pixel point and the corresponding second pixel point satisfies a setting requirement.
- the blurring desired data of the first pixel point includes a blur radius; the distance between the adjacent second pixel point and the corresponding second pixel point satisfies the setting requirement when it is less than or equal to the blur radius.
- the obtaining of the blurring processing result of the main image according to the updated blur map includes: normalizing the pixel value of each second pixel point in the blur map according to the current pixel value and the current blurring weight value of each second pixel point in the updated blur map, and using the normalized blur map as the blurring processing result.
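The generate/weight/update/normalize pipeline described in the claims above can be sketched in one dimension. Each source pixel scatters its value to neighbours within its blur radius, accumulating weighted pixel values and weights, and the result is normalized by the accumulated weights. The uniform weight inside the radius is an assumption; the application does not fix a weighting function:

```python
def scatter_blur(image, radius):
    """1-D sketch of the claimed blur rendering (scatter-and-normalize)."""
    n = len(image)
    acc = [0.0] * n      # accumulated weighted pixel values (the "blur map")
    wgt = [0.0] * n      # accumulated blurring weight values
    for i, v in enumerate(image):
        r = int(radius[i])
        for j in range(max(0, i - r), min(n, i + r + 1)):
            w = 1.0 / (2 * r + 1)       # assumed uniform weight inside the radius
            acc[j] += w * v             # update neighbour's current pixel value
            wgt[j] += w                 # update neighbour's current weight value
    # Normalization: divide each accumulated value by its accumulated weight.
    return [a / w for a, w in zip(acc, wgt)]
```

With all radii zero the image is returned unchanged (no blurring at the focus plane); larger radii spread each pixel's value over more neighbours.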
- the acquiring, according to the corrected depth data, of the blurring desired data of each first pixel point in the main image includes: determining, according to the depth data, the depth difference between each first pixel point in the main image and a predetermined focus point in the main image; and determining the blurring desired data of each first pixel point according to each depth difference value.
- before the acquiring of the blurring desired data of each first pixel point in the main image according to the corrected depth data, the method further includes: acquiring input focus information.
- an image blurring processing apparatus, including: a first acquiring module configured to acquire a main image and an auxiliary image obtained by capturing a same object with a dual camera; a second acquiring module configured to acquire depth data and depth confidence data according to the main image and the auxiliary image, the depth data indicating depth values of corresponding pixel points in the main image and the auxiliary image, and the depth confidence data indicating a confidence level of each depth value in the depth data; a correction module configured to correct at least one depth value in the depth data according to the depth confidence data; and a blurring module configured to perform blurring processing on the main image according to the corrected depth data.
- the second acquiring module includes a first acquiring unit configured to: if corresponding pixel points in the main image and the auxiliary image have the same depth value, assign the corresponding pixel point a depth confidence value larger than a reference value; and/or, if the depth value of a pixel point in the main image exceeds a preset range, assign that pixel point a depth confidence value smaller than the reference value; and/or, if a pixel point in the main image has two or more depth values, assign that pixel point a depth confidence value smaller than the reference value.
- the correction module is configured to replace the depth value of the pixel having the lowest depth confidence value with the depth value of the adjacent pixel having the highest depth confidence value.
- the apparatus further includes: a denoising module configured to perform denoising processing on the depth data.
- the denoising module includes: a filtering unit configured to filter the depth data by using a filter; and/or an increasing unit configured to increase each depth value in the depth data by a preset ratio.
- the second acquiring module includes: a second acquiring unit configured to perform stereo matching on the main image and the auxiliary image to obtain initial depth data; and a third acquiring unit configured to perform depth calibration on the initial depth data so that corresponding pixel points of the main image and the auxiliary image are at the same depth, thereby obtaining the depth data.
- the blurring module includes: a fourth acquiring unit configured to acquire, according to the corrected depth data, blurring desired data of each first pixel point in the main image; and a blurring unit configured to perform blurring processing on the main image according to the blurring desired data of each first pixel point.
- the blurring unit includes: a generating subunit configured to generate a blur map whose pixels correspond to the first pixel points of the main image and have an initial pixel value; a determining subunit configured to determine an initial blurring weight value of each corresponding second pixel point in the blur map according to the blurring desired data of each first pixel point in the main image; an updating subunit configured to perform at least one update on at least one second pixel point in the blur map, the update comprising: updating the current pixel value and current blurring weight value of at least one adjacent second pixel point of the corresponding second pixel point according to the pixel value of the first pixel point and the current blurring weight value of the second pixel point corresponding to that first pixel point; and a blurring subunit configured to obtain a blurring processing result of the main image according to the updated blur map.
- a distance between the adjacent second pixel point and the corresponding second pixel point satisfies a setting requirement.
- the blurring desired data of the first pixel point includes a blur radius; the distance between the adjacent second pixel point and the corresponding second pixel point satisfies the setting requirement when it is less than or equal to the blur radius.
- the blurring subunit is configured to normalize the pixel value of each second pixel point in the blur map according to the current pixel value and the current blurring weight value of each second pixel point in the updated blur map, and to use the normalized blur map as the blurring processing result.
- the fourth obtaining unit includes: a first determining subunit, configured to determine, according to the depth data, a depth difference between each first pixel point in the main image and a predetermined focus point in the main image; And a second determining subunit, configured to respectively determine blurring expected data of each first pixel point according to each depth difference value.
- the fourth obtaining unit further includes: an acquiring subunit, configured to acquire the input focus information.
- a storage medium storing at least one executable instruction adapted to be loaded by a processor to perform the operations corresponding to the image blurring processing method described above.
- an electronic device includes: a processor and a memory; the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the image blurring processing method described in any of the above.
- a computer program comprising computer readable code, wherein when the computer readable code runs on a device, a processor in the device executes instructions of the image blurring processing method described above.
- the depth data and the depth confidence data are acquired from the main image and the auxiliary image obtained by capturing a same object with the dual camera, and the depth data is corrected by the depth confidence data, which effectively improves the accuracy of the depth data.
- the corrected depth data is then used to blur the main image, which improves the blurring effect on the main image.
- FIG. 1 is a flow chart of an image blurring processing method according to an embodiment of the present application.
- FIG. 2 is a flowchart of an image blurring processing method according to another embodiment of the present application.
- FIG. 3 is a main image captured by a dual camera according to another embodiment of the present application.
- FIG. 4 is an auxiliary image captured by a dual camera according to another embodiment of the present application.
- FIG. 5 is a depth map of a main image according to another embodiment of the present application.
- FIG. 6 is a main image after blurring processing according to another embodiment of the present application.
- FIG. 7 is a logic block diagram of an image blurring processing apparatus according to an embodiment of the present application.
- FIG. 8 is a logic block diagram of an image blurring processing apparatus according to another embodiment of the present application.
- FIG. 9 is a logic block diagram of a blurring module of an image blurring processing apparatus according to another embodiment of the present application.
- FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
- Embodiments of the present application can be applied to electronic devices such as terminal devices, computer systems, servers, etc., which can operate with numerous other general purpose or special purpose computing system environments or configurations.
- Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, small computer systems, mainframe computer systems, and distributed cloud computing environments including any of the above, and the like.
- Electronic devices such as terminal devices, computer systems, servers, etc., can be described in the general context of computer system executable instructions (such as program modules) being executed by a computer system.
- program modules may include routines, programs, target programs, components, logic, data structures, and the like that perform particular tasks or implement particular abstract data types.
- the computer system/server can be implemented in a distributed cloud computing environment where tasks are performed by remote processing devices that are linked through a communication network.
- program modules may be located on a local or remote computing system storage medium including storage devices.
- FIG. 1 is a flow chart of an image blurring processing method according to an embodiment of the present application.
- step S110 a main map and a supplementary map obtained by the dual camera capturing the same object are acquired.
- the dual camera can capture the same scene at different angles to obtain two images, namely the main image and the auxiliary image (or the left and right images). Which of the two images is the main image and which is the auxiliary image is determined by a method preset before the dual camera leaves the factory.
- the dual camera can be set on a mobile intelligent terminal that is limited in thickness and cannot integrate a large aperture lens, for example, a dual camera on a smartphone.
- the main image is the image finally presented to the user.
- the main image captured by the dual camera is blurred, so as to improve the blurring effect of the main image.
- the operation S110 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first acquisition module 310 executed by the processor.
- step S120 depth data and depth confidence data are acquired according to the main map and the auxiliary map.
- the depth data indicates depth values of corresponding pixel points in the main image and the auxiliary image, and the depth confidence data indicates a confidence level of each depth value in the depth data.
- the depth confidence data indicates the confidence of each depth value in the depth data and can therefore indicate the accuracy of the depth data, that is, how accurate the acquired depth values of the main image and the auxiliary image respectively are.
- a depth value is the distance from the camera of the object corresponding to a pixel in the captured image (the main image or the auxiliary image).
- depth data of the main image and the auxiliary image may be obtained by stereo matching the main image and the auxiliary image, or by processing the main image and the auxiliary image with other image processing techniques or a deep neural network, but is not limited thereto.
- the operation S120 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a second acquisition module 320 executed by the processor.
- the depth value with lower confidence in the depth data of the main image is corrected, so that the depth value of each pixel in the main image indicated by the depth data of the main image is more accurate.
- the operation S130 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a correction module 330 executed by the processor.
- step S140 the main image is blurred according to the corrected depth data.
- the depth data of the main image is corrected according to the depth confidence data of the main image
- the simulated blur data for blur rendering is calculated according to the corrected depth data of the main image;
- partial regions in the main image are blurred, or the pixel values of some pixels in the main image are adjusted, to complete the blur rendering of the main image. Since the depth data of the main image is corrected by the depth confidence data, the depth value of each pixel in the main image can be indicated more accurately; performing the blurring processing according to the corrected depth data therefore effectively improves the blurring effect of the main image, and solves the problem that images taken by dual-camera mobile phones have no blurring effect or only a weak one.
- the operation S140 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a blurring module 340 executed by the processor.
- the depth data and the depth confidence data are acquired from the main image and the auxiliary image of a same object captured by the dual camera, and the depth data is corrected by the depth confidence data, thereby effectively improving the accuracy of the depth data.
- the main image is then blurred using the corrected depth data, which improves the blurring effect on the main image.
- the image blurring processing method of this embodiment may be performed by a camera, an image processing program, or an intelligent terminal having an imaging function, etc.; it should be apparent to those skilled in the art that, in practical applications, any device having corresponding image processing capability can execute the image blurring processing method of the embodiments of the present application with reference to this embodiment.
- FIG. 2 is a flow chart of an image blurring processing method according to another embodiment of the present application.
- step S210 a main image and a supplementary image obtained by the dual camera capturing the same object are acquired.
- the main image and the auxiliary image are two images obtained by shooting the same scene at different angles with the dual camera.
- the positions of the toy doll's ears near the edge of the picture differ between the main image and the auxiliary image (and the position of the mouse mat on the desktop also differs).
- the operation S210 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a first acquisition module 310 executed by the processor.
- step S220 depth data and depth confidence data are acquired according to the main map and the auxiliary map.
- the depth data indicates depth values of corresponding pixel points in the main picture and the auxiliary picture, and the depth confidence data indicates a confidence level of each depth value in the depth data.
- the initial depth data is obtained by performing stereo matching on the main image and the auxiliary image; the initial depth data is then depth-calibrated so that the corresponding pixel points of the main image and the auxiliary image are at the same depth, thereby obtaining the calibrated depth data of the main image and the auxiliary image.
- the stereo depth matching method can quickly and accurately acquire the initial depth data.
- the initial depth data is calibrated because the dual camera may be slightly displaced or rotated, for example due to a collision, causing corresponding pixel points of the main image and the auxiliary image not to lie at the same depth; the calibration places the corresponding pixel points at the same depth, avoiding an impact on subsequent image processing operations.
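As a toy illustration of the stereo-matching step (not the application's actual matcher), a 1-D block match finds, for each main-image pixel, the horizontal shift into the auxiliary image with the most similar value; depth is then inversely proportional to this disparity (depth = focal length x baseline / disparity):

```python
def disparity_1d(main_row, aux_row, max_disp=3):
    """Toy 1-D stereo match: for each pixel in the main row, find the shift
    (disparity) into the auxiliary row with the most similar pixel value."""
    disp = []
    for x, v in enumerate(main_row):
        # Try every candidate shift d and keep the best (smallest) mismatch.
        candidates = [(abs(v - aux_row[x - d]), d)
                      for d in range(min(x, max_disp) + 1)]
        disp.append(min(candidates)[1])
    return disp
```

Real implementations match windows rather than single values and handle occlusions; pixels where the match is ambiguous are exactly those that receive low depth confidence values.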
- the depth confidence data of the main image is also acquired. For example, if corresponding pixel points in the main image and the auxiliary image have the same depth value, the depth value of the corresponding pixel point is given a depth confidence value larger than the reference value; if the corresponding pixel points in the main image and the auxiliary image have different depth values, the depth value of the corresponding pixel point is given a depth confidence value smaller than the reference value.
- if the depth value of a pixel point in the main image exceeds the preset range, that depth value is given a depth confidence value smaller than the reference value; if it does not exceed the preset range, it is given a depth confidence value no smaller than the reference value. And/or, if a pixel point in the main image has two or more depth values, its depth values are given a depth confidence value smaller than the reference value; if a pixel point in the main image has a single consistent depth value, its depth value is given a depth confidence value no smaller than the reference value.
- the depth data and the depth confidence data are a depth map and a confidence map, respectively.
- the value of each pixel point in the depth map represents the depth value of the corresponding first pixel point in the main image.
- the value of each pixel in the depth confidence map (not shown) corresponding to the main map represents the confidence of the depth value of the corresponding first pixel.
- the depth map and the confidence map of the main image may be the same size as the main image.
- the operation S220 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by a second acquisition module 320 executed by the processor.
- step S230 at least one depth value in the depth data is corrected according to the depth confidence data, and the depth data is subjected to denoising processing.
- the depth value of the pixel having the lowest depth confidence value is replaced by the depth value of the adjacent pixel having the highest depth confidence value. This avoids large errors in the depth value determined for individual pixels in the main image, makes the depth values indicated by the depth data more accurate, and improves the accuracy of the depth data.
- the depth data can also be denoised.
- the denoising process may include filtering the depth data by using a filter, and/or increasing the depth values in the depth data by a preset ratio.
- a smoothing filter is adopted such that pixels of similar colors in the main image have similar depth values, further improving the accuracy of the depth data.
- the operation S230 may be performed by a processor invoking a corresponding instruction stored in a memory, or may be performed by a correction module 330 and a denoising module 350 that are executed by the processor.
- step S240 a depth difference between each of the first pixel points in the main image and a predetermined focus point in the main image is determined according to the depth data.
- the input focus information of the main image is acquired before this step is performed.
- the user may select a point or region in the main image to perform a click operation, or input data such as coordinates of a point or region in the main image, to the point. Or area as the focus point or focus area of the main image.
- for example, the main image includes a person and a vehicle; the user can click on the person as the focus point, and by executing the image blurring processing method of this embodiment, the person in the main image is displayed more clearly while the vehicle and other background areas in the main image are displayed more blurred.
- alternatively, the information of a focus point that has already been determined in the main image may be acquired directly; for example, the focus point may be the point selected by auto focus when the user took the main image.
- the operation S240 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by a fourth acquisition unit 341 in the blurring module 340 being executed by the processor.
- step S250 the blurring expectation data of each first pixel point is determined according to each depth difference value.
- the blurring expectation data of each first pixel point is calculated according to the depth difference between each first pixel point and the predetermined focus point in the main image, and is used to indicate that each first pixel point in the main image is virtualized.
- Here, the blurring desired data includes, but is not limited to, a blur path length such as a radius or diameter; the blur path length may include, but is not limited to, information such as the radius or diameter of the circle of confusion of the pixel after blurring.
- Optionally, the blurring desired data of the first pixel point includes a blur radius. For example, the blur radius c of a first pixel point may be calculated by the formula c = A*abs(d0 - d), where abs is the absolute-value function, A is the aperture size of the simulated large-aperture lens, d0 is the depth value of the predetermined focus point, and d is the depth value of the first pixel point.
- When d equals d0, the first pixel point is at the same depth as the predetermined focus point, the blur radius c = 0, and the first pixel point needs no blurring. When d is not equal to d0, the first pixel point lies away from the predetermined focus point: the closer the distance, the smaller the blur radius c; the farther the distance, the larger the blur radius c. That is to say, in the main image, the predetermined focus point is not blurred; the focus area near the predetermined focus point is blurred only slightly; and areas far from the predetermined focus point are blurred strongly, with the blurring intensity growing with distance.
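For illustration, the blur-radius formula above can be sketched in NumPy as follows; this is a sketch under assumptions — the aperture value and the (x, y) focus coordinates are hypothetical inputs, not values from the patent.

```python
import numpy as np

def blur_radius_map(depth, focus_xy, aperture=2.0):
    """c = A * abs(d0 - d): zero at the focus depth, growing with the
    depth difference from the predetermined focus point."""
    x, y = focus_xy
    d0 = depth[y, x]                      # depth value of the predetermined focus point
    return aperture * np.abs(depth - d0)  # blur radius for every first pixel point
```

A pixel at the focus depth gets radius 0 (no blurring); radii grow linearly with the depth difference, scaled by the simulated aperture.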
- the operation S250 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by a fourth acquisition unit 341 in the blurring module 340 being executed by the processor.
- In step S260, the main image is blurred according to the blurring desired data of each first pixel point.
- In an optional implementation, the method for blur rendering the main image according to the acquired blurring desired data includes: generating a blur map whose pixels correspond to the first pixel points of the main image and whose pixel values are set to an initial value; determining the initial blurring weight value of each corresponding second pixel point in the blur map according to the blurring desired data of each first pixel point in the main image; performing at least one update on at least one second pixel point in the blur map, where the update includes: updating the current pixel value and current blurring weight value of at least one adjacent second pixel point of the corresponding second pixel point according to the pixel value of the first pixel point and the current blurring weight value of the second pixel point corresponding to the first pixel point; and obtaining the blurring result of the main image according to the updated blur map.
- Optionally, when generating the above blur map, a blur map of the same size as the main image is generated, with its pixel points corresponding one-to-one to the first pixel points of the main image, and the pixel value of each second pixel point initialized to 0 (or some other identical value). Since the main image and the blur map are the same size and the first and second pixel points correspond one-to-one, both can be represented by coordinates (x, y). It should be noted that, in practical applications, the blur map of the main image may also be generated first, with steps S210 to S250 then performed to obtain the blurring desired data of the main image.
- In this embodiment, the initial blurring weight value of each second pixel point in the blur map is obtained according to the blurring desired data, so as to blur-render the main image by simulating the blurring process of imaging with a large-aperture lens (for example, that of a single-lens reflex camera). Optionally, the blurring desired data includes a blur radius, and the initial blurring weight value w(x, y) may be determined for each second pixel point (x, y) in the blur map according to the formula w(x, y) = 1/c(x, y)^2, where c(x, y) is the blur radius of the first pixel point (x, y). That is, the larger the blur radius of a first pixel point, the smaller the initial blurring weight value of the corresponding second pixel point.
- Optionally, the distance between the adjacent second pixel point and the corresponding second pixel point satisfies a setting requirement. For example, the setting requirement is that this distance be less than or equal to the blur radius; that is, the blur radius of the first pixel point is not less than the distance between the corresponding second pixel point and the adjacent second pixel point.
- When updating the second pixel points in the blur map, for each second pixel point (x, y), a scatter operation is performed on a plurality of adjacent second pixel points (x', y') to update their current pixel values I(x', y') and current blurring weight values w(x', y'). For example, a new I(x', y') is obtained by accumulating I(x, y)*w(x, y) onto I(x', y'), updating the current pixel value once; and a new w(x', y') is obtained by accumulating w(x, y) onto w(x', y'), updating the current blurring weight value once.
- The blur map is updated by continuously updating the current pixel value and current blurring weight value of each second pixel point in this way until all second pixel points have been updated.
- Optionally, the pixel values of the second pixel points in the blur map are normalized according to the current pixel value and current blurring weight value of each second pixel point in the updated blur map, and the normalized blur map is taken as the blurring result.
- In this embodiment, the current pixel value of each second pixel point is normalized according to its updated current pixel value and current blurring weight value to obtain the pixel value of each second pixel point; that is, the pixel value of a second pixel point is the ratio of its updated current pixel value to its current blurring weight value.
- The obtained pixel values are taken as the pixel values of the second pixel points in the blur map, and the processed blur map is determined as the blurring result of the main image.
- the operation S260 may be performed by a processor invoking a corresponding instruction stored in the memory, or may be performed by a blurring unit in the blurring module 340 being executed by the processor.
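Putting steps S250 and S260 together, the scatter-and-normalize procedure can be sketched as follows. This is an illustrative grayscale-only sketch: clamping small radii to 0.5 is an added assumption so that an in-focus pixel scatters only onto itself and the final normalization never divides by zero.

```python
import numpy as np

def scatter_blur(image, radius):
    """Scatter-based blur: each first pixel (x, y) with blur radius c and
    weight w = 1/c**2 accumulates I(x, y)*w and w onto every second pixel
    within distance c; the blur map is then normalized by the weights."""
    h, w = image.shape
    acc = np.zeros((h, w))      # current pixel values I(x', y')
    wacc = np.zeros((h, w))     # current blurring weight values w(x', y')
    for y in range(h):
        for x in range(w):
            c = max(radius[y, x], 0.5)   # clamp: focus pixels scatter to themselves
            wt = 1.0 / (c * c)           # initial blurring weight value
            r = int(c)
            for ny in range(max(0, y - r), min(h, y + r + 1)):
                for nx in range(max(0, x - r), min(w, x + r + 1)):
                    if (nx - x) ** 2 + (ny - y) ** 2 <= c * c:
                        acc[ny, nx] += image[y, x] * wt   # I(x', y') update
                        wacc[ny, nx] += wt                # w(x', y') update
    return acc / wacc           # normalization: pixel value = I / w
```

Because the weight falls off as 1/c², heavily blurred pixels spread a small contribution over a large neighborhood, approximating the circle of confusion of a large-aperture lens.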
- Referring to the blurred main image of FIG. 6, the main image subjected to the blurring process has a pronounced blurring effect.
- The focus area (the face area of the toy doll on the left) is not blurred, or is blurred only slightly, and is displayed clearly; for pixel points far from the focus area, the blurring intensity increases correspondingly with distance, so they are displayed more and more blurred.
- According to the image blurring processing method of this embodiment, the depth data and depth confidence data of the main image and auxiliary image of the same object captured by a dual camera are acquired, the depth data is corrected using the depth confidence data, and the depth data is denoised, effectively improving the accuracy of the depth data. On this basis, blurring the main image using the corrected depth data improves the blurring effect on the main image; moreover, during the blurring process, the main image is blur-rendered by simulating the blurring process of a large-aperture lens, so that the main image has a pronounced blurring effect.
- In practical applications, the image blurring processing method of this embodiment may be performed by a camera, an image processing program, or an intelligent terminal having an imaging function; however, it should be apparent to those skilled in the art that any device having corresponding image processing and data processing functions may execute the image blurring processing method of the embodiments of the present application with reference to this embodiment.
- any image blurring processing method provided by the embodiment of the present application may be executed by a processor, such as the processor executing any one of the image blurring processing methods mentioned in the embodiments of the present application by calling corresponding instructions stored in the memory. This will not be repeated below.
- Those of ordinary skill in the art will appreciate that all or part of the steps of the foregoing method embodiments may be implemented by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes at least one medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
- FIG. 7 is a logic block diagram of an image blurring processing apparatus according to an embodiment of the present application.
- A "module" or "unit" in the embodiments of the present application may be a software module or unit, such as a "program module" or "program unit", or may be a module or unit formed in any manner from hardware, firmware, or a combination of software, hardware, and firmware; this is not limited in the embodiments of the present application and is not described again here.
- the image blurring processing apparatus of this embodiment includes a first obtaining module 310, a second obtaining module 320, a correcting module 330, and a blurring module 340.
- The first obtaining module 310 is configured to acquire a main image and an auxiliary image obtained by a dual camera capturing the same object.
- the second obtaining module 320 is configured to obtain depth data and depth confidence data according to the main image and the auxiliary image, where the depth data indicates depth values of corresponding pixel points in the main image and the auxiliary image, the depth The confidence data indicates the confidence of each depth value in the depth data.
- the correction module 330 is configured to correct at least one depth value in the depth data according to the depth confidence data.
- the blurring module 340 is configured to perform blurring processing on the main image according to the modified depth data.
- the image blurring processing device of the present embodiment is used to implement the corresponding image blurring processing method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, and details are not described herein again.
- FIG. 8 is a logic block diagram of an image blurring processing apparatus according to another embodiment of the present application.
- According to the image blurring processing apparatus of this embodiment, the second obtaining module 320 includes a first obtaining unit 323 configured to: if corresponding pixel points in the main image and the auxiliary image have the same depth value, assign those pixel points a depth confidence value that is larger relative to a reference value; and/or, if the depth value of a pixel point in the main image exceeds a preset range, assign the pixel point exceeding the preset range a depth confidence value that is smaller relative to the reference value; and/or, if a pixel point in the main image has two or more depth values, assign the pixel point having two or more depth values a depth confidence value that is smaller relative to the reference value.
- the correction module 330 is configured to replace the depth value of the pixel having the lowest depth confidence value with the depth value of the adjacent pixel having the highest depth confidence value.
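A minimal sketch of this replacement rule follows (single pass, 8-neighbourhood; the neighbourhood choice and tie-breaking are assumptions, since the patent does not fix them):

```python
import numpy as np

def correct_depth(depth, conf):
    """Replace the depth value of the pixel with the lowest depth confidence
    by the depth value of its neighbouring pixel with the highest confidence."""
    h, w = depth.shape
    y, x = np.unravel_index(np.argmin(conf), conf.shape)  # lowest-confidence pixel
    best, best_c = depth[y, x], -np.inf
    for ny in range(max(0, y - 1), min(h, y + 2)):
        for nx in range(max(0, x - 1), min(w, x + 2)):
            if (ny, nx) != (y, x) and conf[ny, nx] > best_c:
                best, best_c = depth[ny, nx], conf[ny, nx]
    fixed = depth.copy()
    fixed[y, x] = best            # adopt the most trusted neighbour's depth
    return fixed
```

In practice this would be iterated, or applied to every pixel whose confidence falls below a threshold, until no low-confidence outliers remain.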
- the image blurring processing apparatus of the embodiment further includes: a denoising module 350, configured to perform denoising processing on the depth data.
- The denoising module 350 includes: a filtering unit 352 configured to filter the depth data with a filter; and/or an increasing unit 351 configured to increase each depth value in the depth data by a preset ratio.
- The second obtaining module 320 includes: a second obtaining unit 321 configured to perform stereo matching on the main image and the auxiliary image to obtain initial depth data; and a third obtaining unit 322 configured to perform depth calibration on the initial depth data so that corresponding pixel points of the main image and the auxiliary image are at the same depth, obtaining the depth data.
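To illustrate the stereo-matching step, here is a toy sum-of-absolute-differences block matcher. It is purely illustrative: real dual-camera pipelines work on rectified images with far more robust matchers, and the block size and disparity range here are arbitrary assumptions. The disparity it returns is inversely related to depth.

```python
import numpy as np

def sad_disparity(left, right, max_disp=4, block=3):
    """For each pixel of the main (left) image, find the horizontal shift into
    the auxiliary (right) image whose block sum-of-absolute-differences is
    smallest; that shift is the pixel's disparity."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int64)
    for y in range(r, h - r):
        for x in range(r, w - r):
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - r) + 1):
                cost = np.abs(left[y-r:y+r+1, x-r:x+r+1]
                              - right[y-r:y+r+1, x-d-r:x-d+r+1]).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Converting disparity to metric depth additionally needs the camera baseline and focal length (depth is proportional to baseline times focal length divided by disparity), which is where the depth-calibration step of the third obtaining unit comes in.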
- The blurring module 340 includes: a fourth obtaining unit 341 configured to acquire, according to the corrected depth data, the blurring desired data of each first pixel point in the main image; and a blurring unit 342 configured to blur the main image according to the blurring desired data of each first pixel point.
- In an optional implementation, referring to FIG. 9, the blurring unit 342 includes: a generating sub-unit 3421 configured to generate a blur map whose pixels correspond to the first pixel points of the main image and whose pixel values are set to an initial value;
- a determining sub-unit 3422 configured to determine the initial blurring weight value of each corresponding second pixel point in the blur map according to the blurring desired data of each first pixel point in the main image; an updating sub-unit 3423 configured to perform at least one update on at least one second pixel point in the blur map, the update including: updating the current pixel value and current blurring weight value of at least one adjacent second pixel point of the corresponding second pixel point according to the pixel value of the first pixel point and the current blurring weight value of the second pixel point corresponding to the first pixel point;
- and a blurring sub-unit 3424 configured to obtain the blurring result of the main image according to the updated blur map.
- a distance between the adjacent second pixel point and the corresponding second pixel point satisfies a setting requirement.
- Optionally, the blurring desired data of the first pixel point includes a blur radius, and the distance between the adjacent second pixel point and the corresponding second pixel point satisfying the setting requirement includes: the distance between the adjacent second pixel point and the corresponding second pixel point being less than or equal to the blur radius.
- Optionally, the blurring sub-unit 3424 is configured to normalize the pixel value of each second pixel point in the blur map according to the current pixel value and current blurring weight value of each second pixel point in the updated blur map, and to take the normalized blur map as the blurring result.
- Optionally, the fourth obtaining unit 341 includes: a first determining sub-unit 3411 configured to determine, according to the depth data, the depth difference between each first pixel point in the main image and a predetermined focus point in the main image; and a second determining sub-unit 3412 configured to determine the blurring desired data of each first pixel point according to each depth difference.
- the fourth obtaining unit 341 further includes: an obtaining sub-unit 3413, configured to acquire the input focus information.
- the image blurring processing device of the present embodiment is used to implement the corresponding image blurring processing method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, and details are not described herein again.
- the embodiment of the present application further provides an electronic device, such as a mobile terminal, a personal computer (PC), a tablet computer, a server, and the like.
- Referring to FIG. 10, there is shown a schematic structural diagram of an electronic device 500 suitable for implementing a terminal device or server of an embodiment of the present application.
- As shown in FIG. 10, the electronic device 500 includes one or more processors and communication elements, such as one or more central processing units (CPUs) 501 and/or one or more graphics processors (GPUs) 513; the processor may perform various appropriate actions and processing according to executable instructions stored in a read-only memory (ROM) 502 or executable instructions loaded from the storage portion 508 into a random access memory (RAM) 503.
- the communication component includes a communication component 512 and a communication interface 509.
- the communication component 512 can include, but is not limited to, a network card.
- the network card can include, but is not limited to, an IB (Infiniband) network card.
- The communication interface 509 includes a communication interface of a network interface card such as a LAN card or a modem, and performs communication processing via a network such as the Internet.
- The processor can communicate with the read-only memory 502 and/or the random access memory 503 to execute executable instructions, connect to the communication component 512 via the bus 504, and communicate with other target devices via the communication component 512, thereby completing operations corresponding to any of the methods provided by the embodiments of the present application, for example: acquiring a main image and an auxiliary image obtained by a dual camera capturing the same object; acquiring depth data and depth confidence data according to the main image and the auxiliary image, the depth data indicating the depth value of each corresponding pixel point in the main image and the auxiliary image, and the depth confidence data indicating the confidence of each depth value in the depth data; correcting at least one depth value in the depth data according to the depth confidence data; and blurring the main image according to the corrected depth data.
- In addition, the RAM 503 can store various programs and data required for the operation of the device.
- the CPU 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504.
- ROM 502 is an optional module.
- The RAM 503 stores executable instructions, or executable instructions are written into the ROM 502 at runtime; these executable instructions cause the central processing unit (CPU) 501 to perform the operations corresponding to the above-described methods.
- An input/output (I/O) interface 505 is also coupled to bus 504.
- the communication component 512 can be integrated or can be configured to have multiple sub-modules (eg, multiple IB network cards) and be on a bus link.
- The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), and the like; a storage portion 508 including a hard disk and the like; and a communication interface 509 including a network interface card such as a LAN card or a modem.
- A drive 510 is also coupled to the I/O interface 505 as needed.
- a removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like is mounted on the drive 510 as needed so that a computer program read therefrom is installed into the storage portion 508 as needed.
- It should be noted that the architecture shown in FIG. 10 is only an optional implementation; in practice, the number and types of components in FIG. 10 may be selected, deleted, added, or replaced according to actual needs.
- Different functional components may also be arranged separately or in an integrated manner; for example, the GPU 513 and the CPU 501 may be arranged separately, or the GPU 513 may be integrated on the CPU 501, and the communication component may be arranged separately or integrated on the CPU 501 or the GPU 513, and so on.
- In particular, according to an embodiment of the present application, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present application includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium; the computer program comprises program code for executing the method illustrated in the flowchart, and the program code may include instructions for correspondingly executing the operations of the method provided by the embodiments of the present application, for example: acquiring a main image and an auxiliary image obtained by a dual camera capturing the same object; acquiring depth data and depth confidence data according to the main image and the auxiliary image, the depth data indicating the depth value of each corresponding pixel point in the main image and the auxiliary image, and the depth confidence data indicating the confidence of each depth value in the depth data; correcting at least one depth value in the depth data according to the depth confidence data; and blurring the main image according to the corrected depth data.
- In such an embodiment, the computer program can be downloaded and installed from a network via the communication component, and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the above-described functions defined in the methods of the embodiments of the present application are executed.
- the methods, apparatus, and apparatus of the present application may be implemented in a number of ways.
- the methods, apparatus, and apparatus of the present application can be implemented in software, hardware, firmware, or any combination of software, hardware, and firmware.
- the above-described sequence of steps for the method is for illustrative purposes only, and the steps of the method of the present application are not limited to the order described above unless otherwise specifically stated.
- the present application can also be implemented as a program recorded in a recording medium, the programs including machine readable instructions for implementing the method according to the present application.
- the present application also covers a recording medium storing a program for executing the method according to the present application.
Abstract
Embodiments of the present application provide an image blurring processing method and apparatus, a storage medium, and an electronic device. The image blurring processing method includes: acquiring a main image and an auxiliary image obtained by a dual camera capturing the same object; acquiring depth data and depth confidence data according to the main image and the auxiliary image, the depth data indicating the depth value of each corresponding pixel point in the main image and the auxiliary image, and the depth confidence data indicating the confidence of each depth value in the depth data; correcting at least one depth value in the depth data according to the depth confidence data; and blurring the main image according to the corrected depth data. The technical solution of the present application effectively improves the accuracy of the acquired depth data and thereby improves the blurring effect on the main image.
Description
The present application claims priority to Chinese patent application No. CN 201710359299.3, entitled "Image blurring processing method and apparatus, storage medium, and electronic device", filed with the Chinese Patent Office on May 19, 2017, the entire contents of which are incorporated herein by reference.
Embodiments of the present application relate to image processing technology, and in particular to an image blurring processing method and apparatus, a storage medium, and an electronic device.
Background blurring makes the photographed subject stand out clearly and is popular with photography enthusiasts. At present, image blurring mainly relies on the principle of optical imaging and is realized in hardware with a large lens aperture, so the blurring function is mostly integrated in professional cameras such as single-lens reflex cameras. With the continued popularity of smartphones, most users take photos with their phones; however, because a phone's thickness is limited, it can only mount a small-aperture lens, so it can produce only a weak blurring effect at very close range and cannot produce blurred images in other scenarios.
SUMMARY
Embodiments of the present application provide a technical solution for image blurring processing.
According to one aspect of the embodiments of the present application, an image blurring processing method is provided, including: acquiring a main image and an auxiliary image obtained by a dual camera capturing the same object; acquiring depth data and depth confidence data according to the main image and the auxiliary image, the depth data indicating the depth value of each corresponding pixel point in the main image and the auxiliary image, and the depth confidence data indicating the confidence of each depth value in the depth data; correcting at least one depth value in the depth data according to the depth confidence data; and blurring the main image according to the corrected depth data.
Optionally, obtaining the depth confidence data according to the main image and the auxiliary image includes: if corresponding pixel points in the main image and the auxiliary image have the same depth value, assigning those pixel points a depth confidence value that is larger relative to a reference value; and/or, if the depth value of a pixel point in the main image exceeds a preset range, assigning the pixel point exceeding the preset range a depth confidence value that is smaller relative to the reference value; and/or, if a pixel point in the main image has two or more depth values, assigning the pixel point having two or more depth values a depth confidence value that is smaller relative to the reference value.
Optionally, correcting at least one depth value in the depth data according to the depth confidence data includes: replacing the depth value of the pixel point having the lowest depth confidence value with the depth value of the adjacent pixel point having the highest depth confidence value.
Optionally, before blurring the main image according to the corrected depth data, the method further includes: denoising the depth data.
Optionally, the denoising includes: filtering the depth data with a filter; and/or increasing each depth value in the depth data by a preset ratio.
Optionally, acquiring the depth data according to the main image and the auxiliary image includes: performing stereo matching on the main image and the auxiliary image to obtain initial depth data; and performing depth calibration on the initial depth data so that corresponding pixel points of the main image and the auxiliary image are at the same depth, obtaining the depth data.
Optionally, blurring the main image according to the corrected depth data includes: acquiring blurring desired data of each first pixel point in the main image according to the corrected depth data; and blurring the main image according to the blurring desired data of each first pixel point.
Optionally, blurring the main image according to the blurring desired data of each first pixel point includes: generating a blur map whose pixels correspond to the first pixel points of the main image and whose pixel values are set to an initial value; determining the initial blurring weight value of each corresponding second pixel point in the blur map according to the blurring desired data of each first pixel point in the main image; performing at least one update on at least one second pixel point in the blur map, the update including: updating the current pixel value and current blurring weight value of at least one adjacent second pixel point of the corresponding second pixel point according to the pixel value of the first pixel point and the current blurring weight value of the second pixel point corresponding to the first pixel point; and obtaining the blurring result of the main image according to the updated blur map.
Optionally, the distance between the adjacent second pixel point and the corresponding second pixel point satisfies a setting requirement.
Optionally, the blurring desired data of the first pixel point includes a blur radius, and the distance between the adjacent second pixel point and the corresponding second pixel point satisfying the setting requirement includes: the distance between the adjacent second pixel point and the corresponding second pixel point being less than or equal to the blur radius.
Optionally, obtaining the blurring result of the main image according to the updated blur map includes: normalizing the pixel value of each second pixel point in the blur map according to the current pixel value and current blurring weight value of each second pixel point in the updated blur map, and taking the normalized blur map as the blurring result.
Optionally, acquiring the blurring desired data of each first pixel point in the main image according to the corrected depth data includes: determining, according to the depth data, the depth difference between each first pixel point in the main image and a predetermined focus point in the main image; and determining the blurring desired data of each first pixel point according to each depth difference.
Optionally, before acquiring the blurring desired data of each first pixel point in the main image according to the corrected depth data, the method further includes: acquiring input focus point information.
According to another aspect of the embodiments of the present application, an image blurring processing apparatus is provided, including: a first obtaining module configured to acquire a main image and an auxiliary image obtained by a dual camera capturing the same object; a second obtaining module configured to acquire depth data and depth confidence data according to the main image and the auxiliary image, the depth data indicating the depth value of each corresponding pixel point in the main image and the auxiliary image, and the depth confidence data indicating the confidence of each depth value in the depth data; a correction module configured to correct at least one depth value in the depth data according to the depth confidence data; and a blurring module configured to blur the main image according to the corrected depth data.
Optionally, the second obtaining module includes a first obtaining unit configured to: if corresponding pixel points in the main image and the auxiliary image have the same depth value, assign those pixel points a depth confidence value that is larger relative to a reference value; and/or, if the depth value of a pixel point in the main image exceeds a preset range, assign the pixel point exceeding the preset range a depth confidence value that is smaller relative to the reference value; and/or, if a pixel point in the main image has two or more depth values, assign the pixel point having two or more depth values a depth confidence value that is smaller relative to the reference value.
Optionally, the correction module is configured to replace the depth value of the pixel point having the lowest depth confidence value with the depth value of the adjacent pixel point having the highest depth confidence value.
Optionally, the apparatus further includes a denoising module configured to denoise the depth data.
Optionally, the denoising module includes: a filtering unit configured to filter the depth data with a filter; and/or an increasing unit configured to increase each depth value in the depth data by a preset ratio.
Optionally, the second obtaining module includes: a second obtaining unit configured to perform stereo matching on the main image and the auxiliary image to obtain initial depth data; and a third obtaining unit configured to perform depth calibration on the initial depth data so that corresponding pixel points of the main image and the auxiliary image are at the same depth, obtaining the depth data.
Optionally, the blurring module includes: a fourth obtaining unit configured to acquire the blurring desired data of each first pixel point in the main image according to the corrected depth data; and a blurring unit configured to blur the main image according to the blurring desired data of each first pixel point.
Optionally, the blurring unit includes: a generating sub-unit configured to generate a blur map whose pixels correspond to the first pixel points of the main image and whose pixel values are set to an initial value; a determining sub-unit configured to determine the initial blurring weight value of each corresponding second pixel point in the blur map according to the blurring desired data of each first pixel point in the main image; an updating sub-unit configured to perform at least one update on at least one second pixel point in the blur map, the update including: updating the current pixel value and current blurring weight value of at least one adjacent second pixel point of the corresponding second pixel point according to the pixel value of the first pixel point and the current blurring weight value of the second pixel point corresponding to the first pixel point; and a blurring sub-unit configured to obtain the blurring result of the main image according to the updated blur map.
Optionally, the distance between the adjacent second pixel point and the corresponding second pixel point satisfies a setting requirement.
Optionally, the blurring desired data of the first pixel point includes a blur radius, and the distance between the adjacent second pixel point and the corresponding second pixel point satisfying the setting requirement includes: the distance between the adjacent second pixel point and the corresponding second pixel point being less than or equal to the blur radius.
Optionally, the blurring sub-unit is configured to normalize the pixel value of each second pixel point in the blur map according to the current pixel value and current blurring weight value of each second pixel point in the updated blur map, and to take the normalized blur map as the blurring result.
Optionally, the fourth obtaining unit includes: a first determining sub-unit configured to determine, according to the depth data, the depth difference between each first pixel point in the main image and a predetermined focus point in the main image; and a second determining sub-unit configured to determine the blurring desired data of each first pixel point according to each depth difference.
Optionally, the fourth obtaining unit further includes an obtaining sub-unit configured to acquire input focus point information.
According to yet another aspect of the embodiments of the present application, a storage medium is provided, storing at least one executable instruction adapted to be loaded by a processor to perform the operations corresponding to any of the image blurring processing methods described above.
According to still another aspect of the embodiments of the present application, an electronic device is provided, including a processor and a memory, the memory storing at least one executable instruction that causes the processor to perform the operations corresponding to any of the image blurring processing methods described above.
According to a further aspect of the embodiments of the present application, a computer program is provided, including computer-readable code; when the computer-readable code runs on a device, a processor in the device executes instructions for implementing any of the image blurring processing methods described above.
According to the image blurring processing method and apparatus, storage medium, and electronic device of the embodiments of the present application, the depth data and depth confidence data of the main image and auxiliary image of the same object captured by a dual camera are acquired, and the depth data is corrected using the depth confidence data, effectively improving the accuracy of the depth data; on this basis, blurring the main image using the corrected depth data improves the blurring effect on the main image.
The technical solution of the present application is described in further detail below with reference to the accompanying drawings and embodiments.
The accompanying drawings, which constitute a part of the specification, describe embodiments of the present application and, together with the description, serve to explain the principles of the present application.
The present application can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 is a flowchart of an image blurring processing method according to one embodiment of the present application;
FIG. 2 is a flowchart of an image blurring processing method according to another embodiment of the present application;
FIG. 3 is a main image captured by a dual camera according to another embodiment of the present application;
FIG. 4 is an auxiliary image captured by a dual camera according to another embodiment of the present application;
FIG. 5 is a depth map of the main image according to another embodiment of the present application;
FIG. 6 is a blurred main image according to another embodiment of the present application;
FIG. 7 is a logic block diagram of an image blurring processing apparatus according to one embodiment of the present application;
FIG. 8 is a logic block diagram of an image blurring processing apparatus according to another embodiment of the present application;
FIG. 9 is a logic block diagram of a blurring module of an image blurring processing apparatus according to another embodiment of the present application;
FIG. 10 is a schematic structural diagram of an electronic device according to one embodiment of the present application.
Implementations of the embodiments of the present application are described in further detail below with reference to the accompanying drawings (in which the same reference numerals denote the same elements) and embodiments. The following embodiments are intended to illustrate the present application, not to limit its scope.
Those skilled in the art will understand that terms such as "first" and "second" in the embodiments of the present application are only used to distinguish different steps, devices, or modules, and represent neither any particular technical meaning nor any necessary logical order between them.
It should also be understood that, for ease of description, the dimensions of the parts shown in the drawings are not drawn to actual scale.
The following description of the exemplary embodiments is merely illustrative and in no way limits the present application or its application or use.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but, where appropriate, they should be regarded as part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
The embodiments of the present application may be applied to electronic devices such as terminal devices, computer systems, and servers, which can operate with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with such electronic devices include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing environments including any of the above, and the like.
Electronic devices such as terminal devices, computer systems, and servers may be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules may include routines, programs, target programs, components, logic, data structures, and so on, which perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules may be located on local or remote computing system storage media including storage devices.
FIG. 1 is a flowchart of an image blurring processing method according to one embodiment of the present application.
Referring to FIG. 1, in step S110, a main image and an auxiliary image obtained by a dual camera capturing the same object are acquired.
A dual camera can capture the same scene from different angles to obtain two pictures, a main image and an auxiliary image (or a left image and a right image); which of the two is the main image and which is the auxiliary image is determined in a manner preset before the dual camera leaves the factory. The dual camera may be provided on a mobile intelligent terminal whose limited thickness prevents it from integrating a large-aperture lens, for example, the dual camera on a smartphone.
Of the main image and auxiliary image obtained by the dual camera capturing the same object, the main image is the picture finally presented to the user. The image blurring processing method of the embodiments of the present application blurs the main image captured by the dual camera to improve the blurring effect on the main image.
In an optional example, the operation S110 may be performed by a processor invoking corresponding instructions stored in a memory, or by the first obtaining module 310 run by the processor.
In step S120, depth data and depth confidence data are acquired according to the main image and the auxiliary image, the depth data indicating the depth value of each corresponding pixel point in the main image and the auxiliary image, and the depth confidence data indicating the confidence of each depth value in the depth data.
The depth confidence data indicates the confidence of each depth value in the depth data and thus expresses the accuracy of the depth data; that is, the depth confidence data of the main image and the auxiliary image can respectively express the accuracy of the acquired depth value of each pixel point in the main image and the auxiliary image. A depth value is the distance from the camera of the photographed object corresponding to a pixel point in the captured picture (main image or auxiliary image).
This embodiment does not limit the manner of acquiring the depth data and the depth confidence data. For example, the depth data of the main image and the auxiliary image may be acquired by performing stereo matching on the main image and the auxiliary image, or by processing the main image and the auxiliary image with other image processing techniques or with a deep neural network, but the acquisition is not limited thereto.
In an optional example, the operation S120 may be performed by a processor invoking corresponding instructions stored in a memory, or by the second obtaining module 320 run by the processor.
In step S130, at least one depth value in the depth data is corrected according to the depth confidence data.
For example, depth values with low confidence in the depth data of the main image are corrected according to the depth confidence data of the main image, so that the depth values of the pixel points in the main image indicated by the depth data are more accurate.
In an optional example, the operation S130 may be performed by a processor invoking corresponding instructions stored in a memory, or by the correction module 330 run by the processor.
In step S140, the main image is blurred according to the corrected depth data.
Optionally, the depth data of the main image is corrected according to the depth confidence data of the main image, intended blurring data for blur rendering is calculated according to the corrected depth data, and some areas of the main image are blurred, or the pixel values of some pixel points in the main image are adjusted, thereby completing the blur rendering of the main image. Since the depth data of the main image has been corrected by the depth confidence data, it indicates the depth value of each pixel point in the main image more accurately; further, blurring according to the corrected depth data can effectively improve the blurring effect on the main image, thereby solving the problem that images captured by dual-camera phones have no blurring effect or only a weak one.
In an optional example, the operation S140 may be performed by a processor invoking corresponding instructions stored in a memory, or by the blurring module 340 run by the processor.
According to the image blurring processing method of this embodiment of the present application, the depth data and depth confidence data of the main image and auxiliary image of the same object captured by a dual camera are acquired, and the depth data is corrected using the depth confidence data, effectively improving the accuracy of the depth data; on this basis, blurring the main image using the corrected depth data improves the blurring effect on the main image.
In practical applications, the image blurring processing method of this embodiment may be performed by a camera, an image processing program, or an intelligent terminal having an imaging function; however, those skilled in the art will appreciate that any device having corresponding image processing and data processing functions may execute the image blurring processing method of the embodiments of the present application with reference to this embodiment.
FIG. 2 is a flowchart of an image blurring processing method according to another embodiment of the present application.
Referring to FIG. 2, in step S210, a main image and an auxiliary image obtained by a dual camera capturing the same object are acquired.
For example, FIG. 3 and FIG. 4 show the main image and auxiliary image acquired in this embodiment. They are two pictures of the same scene captured by the dual camera from different angles; comparing FIG. 3 and FIG. 4, the position of the toy doll's ears near the picture edge differs between the two images (their distance from the mouse pad on the desk differs).
In an optional example, the operation S210 may be performed by a processor invoking corresponding instructions stored in a memory, or by the first obtaining module 310 run by the processor.
In step S220, depth data and depth confidence data are acquired according to the main image and the auxiliary image, the depth data indicating the depth value of each corresponding pixel point in the main image and the auxiliary image, and the depth confidence data indicating the confidence of each depth value in the depth data.
In an optional implementation, initial depth data is obtained by performing stereo matching on the main image and the auxiliary image; further, depth calibration is performed on the initial depth data so that corresponding pixel points of the main image and the auxiliary image are at the same depth, and the calibrated depth data of the main image and the auxiliary image is then acquired. Stereo matching makes it possible to acquire the initial depth data quickly and accurately. Calibrating the initial depth data ensures that, when a slight displacement or rotation of the dual camera caused by collisions or other factors leaves corresponding pixel points of the main image and the auxiliary image at different depths, the corresponding pixel points are brought to the same depth, avoiding adverse effects on subsequent image processing operations.
In this embodiment, after the depth data of the main image and the auxiliary image is obtained, the depth confidence data of the main image is also acquired. For example, if corresponding pixel points in the main image and the auxiliary image have the same depth value, the depth values of those pixel points are assigned a depth confidence value that is larger relative to a reference value; if corresponding pixel points have different depth values, their depth values are assigned a smaller depth confidence value. And/or, if the depth value of a pixel point in the main image exceeds a preset range, that depth value is assigned a depth confidence value that is smaller relative to the reference value; if it does not exceed the preset range, a larger one. And/or, if a pixel point in the main image has two or more depth values, those depth values are assigned a depth confidence value that is smaller relative to the reference value; if the pixel point has a single depth value, a larger one.
Optionally, the depth data and depth confidence data take the form of a depth map and a confidence map, respectively. For example, referring to the depth map of the main image in FIG. 5, the value of each pixel point in the depth map represents the depth value of the corresponding first pixel point in the main image; the value of each pixel point in the depth confidence map corresponding to the main image (not shown) represents the confidence of the depth value of the corresponding first pixel point. The depth map and the confidence map of the main image may be the same size as the main image.
In an optional example, the operation S220 may be performed by a processor invoking corresponding instructions stored in a memory, or by the second obtaining module 320 run by the processor.
在步骤S230,根据深度置信度数据修正深度数据中至少一个深度值,并对深度数据进行去噪处理。
可选地,在根据主图对应的深度置信度数据修正主图的深度数据时,以具有最高深度置信度值的相邻像素点的深度值,替换具有最低深度置信度值的像素点的深度值,以避免为主图中各个像素点确定的深度值出现较大误差,使得深度数据所指示的深度值更加准确,提高深度数据的准确度。
此外,为了进一步提高获取的深度数据的准确性,还可以对深度数据进行去噪处理。可选地,去噪处理可以包括采用滤波器对深度数据进行滤波处理,和/或,将深度数据中各深度值按照预设比例增大。例如,采用平滑滤波器,使得主图中颜色相似的像素点具有相似的深度值,进一步提高深度数据的准确度。以及,对深度数据中各深度值进行拉伸处理,将深度数据中各深度值按照预设比例增大,以增大各个像素点的深度值之间的对比度。
在一个可选示例中,该操作S230可以由处理器调用存储器存储的相应指令执行,也 可以由被处理器运行的修正模块330和去噪模块350执行。
在步骤S240,根据深度数据确定主图中各第一像素点与主图中预定对焦点的深度差值。
在本实施例中,在执行该步骤之前,通过输入的方式获取主图的对焦点信息。可选地,在对拍摄得到的主图进行虚化处理时,用户可以在主图中选择一个点或区域进行点击操作,或者输入主图中的一个点或区域的坐标等数据,以该点或区域作为主图的对焦点或对焦区域。例如,主图中包括人和车辆,用户可以点击人作为对焦点,通过执行本实施例的图像虚化处理方法,来使主图中的人显示得更为清晰,并使主图中的车辆及其他背景区域显示得较为模糊。
当然,在其他实施例中,针对用户拍摄主图时已经选择好对焦点的情形,执行该步骤,也可以直接获取主图中已经确定的对焦点的信息。其中,用户选择对焦点可以为用户拍摄主图时,摄像头进行自动对焦选择的对焦点。
根据获取的对焦点信息,确定主图中的预定对焦点,并根据去噪后的深度数据中获取主图中各第一像素点以及预定对焦点的深度值,计算各第一像素点与预定对焦点的深度值的差值。
在一个可选示例中,该操作S240可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的虚化模块340中的第四获取单元341执行。
在步骤S250,根据各深度差值分别确定各第一像素点的虚化期望数据。
本实施例中,根据主图中各第一像素点与预定对焦点的深度差值,来计算各第一像素点的虚化期望数据,用于指示对主图中各第一像素点进行虚化处理所期望或者拟达到的虚化程度。这里,虚化期望数据包括但不限于虚化半径或直径等径长,虚化径长可包括但不限于虚化后像素的弥散圆的半径或直径等信息。
可选地,第一像素点的虚化期望数据包括虚化半径。例如,通过公式:c=A*abs(d
0-d)来计算第一像素点的虚化半径c,其中,abs为求取绝对值函数,A为模拟的大光圈镜头的光圈大小,d
0为预定对焦点的深度值,d为第一像素点的深度值。
当d等于d
0时,该第一像素点与预定对焦点处于同一深度,虚化半径c=0,该第一像素点不需要进行虚化处理。当d不等于d
0时,该第一像素点远离预定对焦点,且距离越近,虚化半径c越小;距离越远,虚化半径c越大。也就是说,在主图中,预定对焦点不进行虚化处理;预定对焦点附近的对焦区域在进行虚化处理时,虚化力度较小;远离预定对焦点的区域,在进行虚化处理时的虚化力度较大,且距离越远,虚化力度越大。
在一个可选示例中,该操作S250可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的虚化模块340中的第四获取单元341执行。
在步骤S260,根据各第一像素点的虚化期望数据对主图进行虚化处理。
在一种可选地实施方式中,根据获取的虚化期望数据对主图进行虚化渲染的方法包括:生成主图的第一像素点对应且像素值为初始值的虚化图;根据主图中各第一像素点的虚化期望数据分别确定虚化图中相应的第二像素点的初始虚化权重值; 针对上述虚化图中的至少一个第二像素点进行至少一次更新,更新包括:根据第一像素点的像素值和与第一像素点对应的第二像素点的当前虚化权重值,更新对应的第二像素点的至少一个邻近的第二像素点的当前像素值和当前虚化权重值;根据更新后的虚化图获得主图的虚化处理结果。
可选地,在生成上述虚化图时,生成与主图大小相同,且像素点与主图中的各第一像素点一一对应的虚化图,并将虚化图中的各第二像素点的像素值初始化为0(或者某一相同数值)。这里,由于主图和虚化图大小相等,且第一像素点和第二像素点一一对应,因此,第一像素点和第二像素点均可以用坐标(x,y)来表示。在这里需要说明的是,在实际应用中,也可以先生成主图的虚化图,再执行步骤S210至S250来获取主图的虚化期望数据。
在本实施例中,根据虚化期望数据来获取虚化图中各第二像素点的初始虚化权重值,用于以通过模拟具有大光圈的镜头(例如单反相机)成像时的虚化过程,来对主图进行虚化渲染。可选地,虚化期望数据包括虚化半径,在获取上述初始虚化权重值时,可以根据公式:w(x,y)=1/c(x,y)
2,分别为虚化图中的各第二像素点(x,y)确定各自的初始虚化权重值w(x,y)。其中,c(x,y)为第一像素点(x,y)的虚化半径。也即,第一像素点的虚化半径越大,相应的第二像素点的初始虚化权重值越小。
可选地,上述邻近的第二像素点与对应的第二像素点之间的距离满足设定要求。例如,该设定要求为小于或等于虚化半径,也即,第一像素点的虚化半径大于对应的第二像素点与邻近的第二像素点之间的距离。
在对虚化图中的第二像素点进行更新时,对针对虚化图中的每一个第二像素点(x,y),对多个邻近的第二像素点(x’,y’)均进行散射操作,来更新当前像素值I(x’,y’)和当前虚化权重值w(x’,y’)。例如,通过在I(x’,y’)的基础上累加I(x’,y’)*w(x,y)来获取新的I(x’,y’),对当前像素值进行一次更新;通过在w(x’,y’)的基础上累加w(x,y)来获取新的w(x’,y’),对当前虚化权重值进行一次更新。
通过对各第二像素点的当前像素值和当期虚化权重值进行不断更新来更新虚化图,直至所有第二像素点完成更新。
可选地,根据更新后的虚化图中各第二像素点的当前像素值和当前虚化权重值对虚化图中的各第二像素点的像素值进行归一化处理,将归一化处理后的虚化图作为虚化处理结果。
在本实施例中,根据更新后各第二像素点的当前像素值和当前虚化权重值,对各第二像素点的当前像素值进行归一化处理,来获取各第二像素点的像素值。也即,第二像素点的像素值为更新后的当前像素值和当前虚化权重值的比值。根据获取的各像素值确定为虚化图中各第二像素点的像素值,并将处理后的虚化图确定为主图的虚化处理结果。
在一个可选示例中,该操作S260可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的虚化模块340中的虚化单元执行。
参照图6的经过虚化处理的主图,经过虚化处理的主图具有明显的虚化效果。 图中对焦区域(对焦区域为左侧的玩具娃娃的脸部区域)没有经过虚化处理,或者虚化力度较小,能够清晰显示;远离对焦区域的像素点随距离的增加,虚化力度相应地越来越大,显示得越来越模糊。
根据本申请实施例的图像虚化处理方法,通过获取双摄像头拍摄同一对象的主图和辅图的深度数据以及深度置信度数据,并通过深度置信度数据来修正深度数据,以及对深度数据进行去噪处理,有效提高了深度数据的准确性;在此基础上,通过修正后的深度数据来对主图进行虚化处理,可以提高对主图的虚化效果;并且,在进行虚化处理时,通过模拟大光圈镜头的虚化过程,来对主图进行虚化渲染,使得主图具有明显的虚化效果。
In practical applications, the image blurring processing method of this embodiment may be performed by a camera, an image processing program, a smart terminal with an image capture function, or the like; however, it should be clear to persons skilled in the art that any device with corresponding image processing and data processing functions may perform the image blurring processing method of the embodiments of the present application with reference to this embodiment.
Alternatively, any image blurring processing method provided by the embodiments of the present application may be performed by a processor; for example, the processor performs any image blurring processing method mentioned in the embodiments of the present application by invoking corresponding instructions stored in a memory. Details are not described below again.
Persons of ordinary skill in the art can understand that all or some of the steps for implementing the foregoing method embodiments may be achieved by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes at least one medium capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Fig. 7 is a logic block diagram of an image blurring processing apparatus according to one embodiment of the present application. Persons skilled in the art can understand that the "modules" or "units" in the embodiments of the present application may be software modules or units such as "program modules" or "program units", or may be modules or units formed in any manner by hardware, firmware, or any combination of software, hardware, and firmware; this is not limited in the embodiments of the present application and is not described again.
Referring to Fig. 7, the image blurring processing apparatus of this embodiment includes a first obtaining module 310, a second obtaining module 320, a correction module 330, and a blurring module 340.
The first obtaining module 310 is configured to obtain a main image and an auxiliary image captured of the same object by dual cameras. The second obtaining module 320 is configured to obtain depth data and depth confidence data according to the main image and the auxiliary image, the depth data indicating the depth values of corresponding pixels in the main image and the auxiliary image, and the depth confidence data indicating the confidence of each depth value in the depth data. The correction module 330 is configured to correct at least one depth value in the depth data according to the depth confidence data. The blurring module 340 is configured to blur the main image according to the corrected depth data.
The image blurring processing apparatus of this embodiment is configured to implement the corresponding image blurring processing method in the foregoing method embodiments and has the beneficial effects of the corresponding method embodiments; details are not described here again.
Fig. 8 is a logic block diagram of an image blurring processing apparatus according to another embodiment of the present application.
According to the image blurring processing apparatus of this embodiment, the second obtaining module 320 includes a first obtaining unit 323 configured to: if corresponding pixels in the main image and the auxiliary image have the same depth value, assign the corresponding pixels a depth confidence value greater than a reference value; and/or, if the depth value of a pixel in the main image is outside a preset range, assign the pixel outside the preset range a depth confidence value smaller than the reference value; and/or, if a pixel in the main image has two or more depth values, assign the pixel having two or more depth values a depth confidence value smaller than the reference value.
Optionally, the correction module 330 is configured to replace the depth value of the pixel having the lowest depth confidence value with the depth value of the adjacent pixel having the highest depth confidence value.
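As an illustration of the correction just described, the sketch below replaces the depth of the lowest-confidence pixel(s) with the depth of their highest-confidence neighbour. It is a hypothetical rendering on 2-D lists: the data layout, the choice of 4-connectivity, and all names are assumptions, not the module's actual implementation.

```python
def correct_depth(depth, confidence):
    """Replace each lowest-confidence pixel's depth with the depth of its
    highest-confidence 4-neighbour. Returns a corrected copy of `depth`."""
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    lo = min(min(row) for row in confidence)   # the lowest confidence value
    for y in range(h):
        for x in range(w):
            if confidence[y][x] != lo:
                continue
            # collect valid 4-neighbours of the low-confidence pixel
            neighbours = [(y + dy, x + dx)
                          for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                          if 0 <= y + dy < h and 0 <= x + dx < w]
            # pick the neighbour with the highest confidence
            ny, nx = max(neighbours, key=lambda p: confidence[p[0]][p[1]])
            out[y][x] = depth[ny][nx]
    return out
```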
Optionally, the image blurring processing apparatus of this embodiment further includes a denoising module 350 configured to denoise the depth data.
Optionally, the denoising module 350 includes: a filtering unit 352 configured to filter the depth data with a filter; and/or an increasing unit 351 configured to increase each depth value in the depth data by a preset ratio.
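The two denoising options above might look as follows. This is a sketch under stated assumptions: a 3×3 mean filter stands in for the unspecified filter, and the scaling ratio of 1.2 is an arbitrary placeholder for the preset ratio.

```python
def mean_filter(depth):
    """3x3 mean filter over a 2-D list of depth values (borders clamped)."""
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # average over the 3x3 window, clipped at the image borders
            vals = [depth[cy][cx]
                    for cy in range(max(0, y - 1), min(h, y + 2))
                    for cx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def scale_depth(depth, ratio=1.2):
    """Increase each depth value by a preset ratio (the value is assumed)."""
    return [[v * ratio for v in row] for row in depth]
```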
Optionally, the second obtaining module 320 includes: a second obtaining unit 321 configured to perform stereo matching on the main image and the auxiliary image to obtain initial depth data; and a third obtaining unit 322 configured to perform depth calibration on the initial depth data so that corresponding pixels of the main image and the auxiliary image are located at the same depth, to obtain the depth data.
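For context on the stereo step, a disparity found by matching a pixel between the two camera views is conventionally converted to depth via the pinhole-stereo relation depth = f·B/d (f the focal length in pixels, B the camera baseline). The sketch below shows only this conversion; the focal length and baseline values are placeholder assumptions, and the embodiment's actual matching and calibration are not shown.

```python
def disparity_to_depth(disparity, focal_px=1000.0, baseline_m=0.02):
    """Map a 2-D list of disparities (in pixels) to depths (in metres)
    using depth = f * B / d. Zero or invalid disparity maps to 0.0."""
    return [[focal_px * baseline_m / d if d > 0 else 0.0 for d in row]
            for row in disparity]
```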
Optionally, the blurring module 340 includes: a fourth obtaining unit 341 configured to obtain desired blurring data of each first pixel in the main image according to the corrected depth data; and a blurring unit 342 configured to blur the main image according to the desired blurring data of each first pixel.
In an optional implementation, referring to Fig. 9, the blurring unit 342 includes: a generation subunit 3421 configured to generate a blurred image whose pixels correspond to the first pixels of the main image and have initial pixel values; a determination subunit 3422 configured to determine initial blurring weight values of the corresponding second pixels in the blurred image according to the desired blurring data of each first pixel in the main image; an update subunit 3423 configured to perform at least one update on at least one second pixel in the blurred image, the update including: updating the current pixel value and current blurring weight value of at least one second pixel adjacent to the second pixel corresponding to a first pixel, according to the pixel value of the first pixel and the current blurring weight value of the corresponding second pixel; and a blurring subunit 3424 configured to obtain the blurring result of the main image according to the updated blurred image.
Optionally, the distance between the adjacent second pixel and the corresponding second pixel satisfies a set requirement.
Optionally, the desired blurring data of the first pixel includes a blur radius, and the set requirement on the distance between the adjacent second pixel and the corresponding second pixel is that this distance is less than or equal to the blur radius.
Optionally, the blurring subunit 3424 is configured to normalize the pixel values of the second pixels in the blurred image according to the current pixel values and current blurring weight values of the second pixels in the updated blurred image, and to take the normalized blurred image as the blurring result.
Optionally, the fourth obtaining unit 341 includes: a first determination subunit 3411 configured to determine, according to the depth data, the depth difference between each first pixel in the main image and a predetermined focus point in the main image; and a second determination subunit 3412 configured to determine the desired blurring data of each first pixel according to the respective depth differences.
Optionally, the fourth obtaining unit 341 further includes an obtaining subunit 3413 configured to obtain input focus point information.
The image blurring processing apparatus of this embodiment is configured to implement the corresponding image blurring processing method in the foregoing method embodiments and has the beneficial effects of the corresponding method embodiments; details are not described here again.
An embodiment of the present application further provides an electronic device, which may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, or a server. Referring now to Fig. 10, it shows a schematic structural diagram of an electronic device 500 suitable for implementing a terminal device or a server of the embodiments of the present application.
As shown in Fig. 10, the electronic device 500 includes one or more processors, a communication element, and the like. The one or more processors are, for example, one or more central processing units (CPUs) 501 and/or one or more graphics processing units (GPUs) 513; a processor may perform various appropriate actions and processing according to executable instructions stored in a read-only memory (ROM) 502 or loaded from a storage section 508 into a random access memory (RAM) 503. The communication element includes a communication component 512 and a communication interface 509. The communication component 512 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (InfiniBand) network card; the communication interface 509 includes interfaces of network interface cards such as a LAN card and a modem, and performs communication processing via a network such as the Internet.
The processor may communicate with the read-only memory 502 and/or the random access memory 503 to execute the executable instructions, is connected to the communication component 512 via a bus 504, and communicates with other target devices via the communication component 512, thereby completing the operations corresponding to any method provided by the embodiments of the present application, for example: obtaining a main image and an auxiliary image captured of the same object by dual cameras; obtaining depth data and depth confidence data according to the main image and the auxiliary image, the depth data indicating the depth values of corresponding pixels in the main image and the auxiliary image, and the depth confidence data indicating the confidence of each depth value in the depth data; correcting at least one depth value in the depth data according to the depth confidence data; and blurring the main image according to the corrected depth data.
In addition, the RAM 503 may also store various programs and data required for the operation of the apparatus. The CPU 501, the ROM 502, and the RAM 503 are connected to one another via the bus 504. When the RAM 503 is present, the ROM 502 is an optional module. The RAM 503 stores executable instructions, or writes executable instructions into the ROM 502 at runtime, and the executable instructions cause the central processing unit (CPU) 501 to perform the operations corresponding to the foregoing communication method. An input/output (I/O) interface 505 is also connected to the bus 504. The communication component 512 may be integrated, or may be provided with multiple sub-modules (for example, multiple IB network cards) linked on the bus.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section 507 including a cathode-ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 508 including a hard disk and the like; and a communication interface 509 including network interface cards such as a LAN card and a modem. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read from it is installed into the storage section 508 as needed.
It should be noted that the architecture shown in Fig. 10 is only an optional implementation; in practice, the number and types of the components in Fig. 10 may be selected, reduced, increased, or replaced according to actual needs. Different functional components may also be arranged separately or in an integrated manner; for example, the GPU 513 and the CPU 501 may be arranged separately, or the GPU 513 may be integrated in the CPU 501; the communication component 512 may be arranged separately or integrated in the CPU 501 or the GPU 513; and so on. All these alternative implementations fall within the scope of protection of the present application.
In particular, according to the embodiments of the present application, the process described above with reference to the flowchart may be implemented as a computer software program. For example, the embodiments of the present application include a computer program product that includes a computer program tangibly embodied on a machine-readable medium; the computer program contains program code for performing the method shown in the flowchart, and the program code may include instructions corresponding to the operations of the method provided by the embodiments of the present application, for example: obtaining a main image and an auxiliary image captured of the same object by dual cameras; obtaining depth data and depth confidence data according to the main image and the auxiliary image, the depth data indicating the depth values of corresponding pixels in the main image and the auxiliary image, and the depth confidence data indicating the confidence of each depth value in the depth data; correcting at least one depth value in the depth data according to the depth confidence data; and blurring the main image according to the corrected depth data. In such embodiments, the computer program may be downloaded and installed from a network via the communication element, and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the foregoing functions defined in the methods of the embodiments of the present application are performed.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts among the embodiments, reference may be made to one another. For the system embodiments, since they basically correspond to the method embodiments, the description is relatively simple, and for related parts, reference may be made to the description of the method embodiments.
The methods, apparatuses, and devices of the present application may be implemented in many ways, for example, by software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the method is for illustration only, and the steps of the method of the present application are not limited to the order described above unless otherwise specifically stated. In addition, in some embodiments, the present application may also be implemented as programs recorded in a recording medium, these programs including machine-readable instructions for implementing the methods according to the present application. Thus, the present application also covers recording media storing programs for performing the methods according to the present application.
The description of the present application is given for the sake of example and description, and is not exhaustive or intended to limit the present application to the disclosed forms. Many modifications and variations will be obvious to persons of ordinary skill in the art. The embodiments were chosen and described in order to better explain the principles and practical applications of the present application, and to enable persons of ordinary skill in the art to understand the present application so as to design various embodiments, with various modifications, suited to particular uses.
The above descriptions are only implementations of the embodiments of the present application, but the scope of protection of the embodiments of the present application is not limited thereto. Any variation or replacement readily conceivable by persons skilled in the art within the technical scope disclosed in the embodiments of the present application shall be covered by the scope of protection of the embodiments of the present application. Therefore, the scope of protection of the embodiments of the present application shall be subject to the scope of protection of the claims.
Claims (29)
- An image blurring processing method, comprising: obtaining a main image and an auxiliary image captured of the same object by dual cameras; obtaining depth data and depth confidence data according to the main image and the auxiliary image, the depth data indicating depth values of corresponding pixels in the main image and the auxiliary image, and the depth confidence data indicating the confidence of each depth value in the depth data; correcting at least one depth value in the depth data according to the depth confidence data; and blurring the main image according to the corrected depth data.
- The method according to claim 1, wherein the obtaining depth confidence data according to the main image and the auxiliary image comprises: if corresponding pixels in the main image and the auxiliary image have the same depth value, assigning the corresponding pixels a depth confidence value greater than a reference value; and/or, if the depth value of a pixel in the main image is outside a preset range, assigning the pixel outside the preset range a depth confidence value smaller than the reference value; and/or, if a pixel in the main image has two or more depth values, assigning the pixel having two or more depth values a depth confidence value smaller than the reference value.
- The method according to claim 1 or 2, wherein the correcting at least one depth value in the depth data according to the depth confidence data comprises: replacing the depth value of the pixel having the lowest depth confidence value with the depth value of the adjacent pixel having the highest depth confidence value.
- The method according to any one of claims 1-3, further comprising, before the blurring the main image according to the corrected depth data: denoising the depth data.
- The method according to claim 4, wherein the denoising comprises: filtering the depth data with a filter; and/or increasing each depth value in the depth data by a preset ratio.
- The method according to any one of claims 1-5, wherein the obtaining depth data according to the main image and the auxiliary image comprises: performing stereo matching on the main image and the auxiliary image to obtain initial depth data; and performing depth calibration on the initial depth data so that corresponding pixels of the main image and the auxiliary image are located at the same depth, to obtain the depth data.
- The method according to any one of claims 1-6, wherein the blurring the main image according to the corrected depth data comprises: obtaining desired blurring data of each first pixel in the main image according to the corrected depth data; and blurring the main image according to the desired blurring data of each first pixel.
- The method according to claim 7, wherein the blurring the main image according to the desired blurring data of each first pixel comprises: generating a blurred image whose pixels correspond to the first pixels of the main image and have initial pixel values; determining initial blurring weight values of the corresponding second pixels in the blurred image according to the desired blurring data of each first pixel in the main image; performing at least one update on at least one second pixel in the blurred image, the update comprising: updating the current pixel value and current blurring weight value of at least one second pixel adjacent to the second pixel corresponding to a first pixel, according to the pixel value of the first pixel and the current blurring weight value of the corresponding second pixel; and obtaining the blurring result of the main image according to the updated blurred image.
- The method according to claim 8, wherein the distance between the adjacent second pixel and the corresponding second pixel satisfies a set requirement.
- The method according to claim 9, wherein the desired blurring data of the first pixel comprises a blur radius, and the set requirement on the distance between the adjacent second pixel and the corresponding second pixel comprises: the distance between the adjacent second pixel and the corresponding second pixel is less than or equal to the blur radius.
- The method according to any one of claims 8-10, wherein the obtaining the blurring result of the main image according to the updated blurred image comprises: normalizing the pixel values of the second pixels in the blurred image according to the current pixel values and current blurring weight values of the second pixels in the updated blurred image, and taking the normalized blurred image as the blurring result.
- The method according to any one of claims 7-11, wherein the obtaining desired blurring data of each first pixel in the main image according to the corrected depth data comprises: determining, according to the depth data, the depth difference between each first pixel in the main image and a predetermined focus point in the main image; and determining the desired blurring data of each first pixel according to the respective depth differences.
- The method according to claim 12, further comprising, before the obtaining desired blurring data of each first pixel in the main image according to the corrected depth data: obtaining input focus point information.
- An image blurring processing apparatus, comprising: a first obtaining module configured to obtain a main image and an auxiliary image captured of the same object by dual cameras; a second obtaining module configured to obtain depth data and depth confidence data according to the main image and the auxiliary image, the depth data indicating depth values of corresponding pixels in the main image and the auxiliary image, and the depth confidence data indicating the confidence of each depth value in the depth data; a correction module configured to correct at least one depth value in the depth data according to the depth confidence data; and a blurring module configured to blur the main image according to the corrected depth data.
- The apparatus according to claim 14, wherein the second obtaining module comprises a first obtaining unit configured to: if corresponding pixels in the main image and the auxiliary image have the same depth value, assign the corresponding pixels a depth confidence value greater than a reference value; and/or, if the depth value of a pixel in the main image is outside a preset range, assign the pixel outside the preset range a depth confidence value smaller than the reference value; and/or, if a pixel in the main image has two or more depth values, assign the pixel having two or more depth values a depth confidence value smaller than the reference value.
- The apparatus according to claim 14 or 15, wherein the correction module is configured to replace the depth value of the pixel having the lowest depth confidence value with the depth value of the adjacent pixel having the highest depth confidence value.
- The apparatus according to any one of claims 14-16, further comprising: a denoising module configured to denoise the depth data.
- The apparatus according to claim 17, wherein the denoising module comprises: a filtering unit configured to filter the depth data with a filter; and/or an increasing unit configured to increase each depth value in the depth data by a preset ratio.
- The apparatus according to any one of claims 14-18, wherein the second obtaining module comprises: a second obtaining unit configured to perform stereo matching on the main image and the auxiliary image to obtain initial depth data; and a third obtaining unit configured to perform depth calibration on the initial depth data so that corresponding pixels of the main image and the auxiliary image are located at the same depth, to obtain the depth data.
- The apparatus according to any one of claims 14-19, wherein the blurring module comprises: a fourth obtaining unit configured to obtain desired blurring data of each first pixel in the main image according to the corrected depth data; and a blurring unit configured to blur the main image according to the desired blurring data of each first pixel.
- The apparatus according to claim 20, wherein the blurring unit comprises: a generation subunit configured to generate a blurred image whose pixels correspond to the first pixels of the main image and have initial pixel values; a determination subunit configured to determine initial blurring weight values of the corresponding second pixels in the blurred image according to the desired blurring data of each first pixel in the main image; an update subunit configured to perform at least one update on at least one second pixel in the blurred image, the update comprising: updating the current pixel value and current blurring weight value of at least one second pixel adjacent to the second pixel corresponding to a first pixel, according to the pixel value of the first pixel and the current blurring weight value of the corresponding second pixel; and a blurring subunit configured to obtain the blurring result of the main image according to the updated blurred image.
- The apparatus according to claim 21, wherein the distance between the adjacent second pixel and the corresponding second pixel satisfies a set requirement.
- The apparatus according to claim 22, wherein the desired blurring data of the first pixel comprises a blur radius, and the set requirement on the distance between the adjacent second pixel and the corresponding second pixel comprises: the distance between the adjacent second pixel and the corresponding second pixel is less than or equal to the blur radius.
- The apparatus according to any one of claims 21-23, wherein the blurring subunit is configured to normalize the pixel values of the second pixels in the blurred image according to the current pixel values and current blurring weight values of the second pixels in the updated blurred image, and to take the normalized blurred image as the blurring result.
- The apparatus according to any one of claims 20-24, wherein the fourth obtaining unit comprises: a first determination subunit configured to determine, according to the depth data, the depth difference between each first pixel in the main image and a predetermined focus point in the main image; and a second determination subunit configured to determine the desired blurring data of each first pixel according to the respective depth differences.
- The apparatus according to claim 25, wherein the fourth obtaining unit further comprises: an obtaining subunit configured to obtain input focus point information.
- A storage medium storing at least one executable instruction, the executable instruction being adapted to be loaded by a processor to perform the operations corresponding to the image blurring processing method according to any one of claims 1-13.
- An electronic device, comprising: a processor and a memory; the memory being configured to store at least one executable instruction, the executable instruction causing the processor to perform the operations corresponding to the image blurring processing method according to any one of claims 1-13.
- A computer program, comprising computer-readable code, wherein when the computer-readable code is run on a device, a processor in the device executes instructions for implementing the image blurring processing method according to any one of claims 1-13.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| US16/457,438 US10970821B2 (en) | 2017-05-19 | 2019-06-28 | Image blurring methods and apparatuses, storage media, and electronic devices |
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201710359299.3 | 2017-05-19 | | |
| CN201710359299.3A CN108234858B (zh) | 2017-05-19 | 2017-05-19 | Image blurring processing method and apparatus, storage medium and electronic device |
Related Child Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| US16/457,438 Continuation US10970821B2 (en) | | 2017-05-19 | 2019-06-28 |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| WO2018210318A1 (zh) | 2018-11-22 |
Family
ID=62656520

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| PCT/CN2018/087372 WO2018210318A1 (zh) | Image blurring processing method and apparatus, storage medium and electronic device | | 2018-05-17 |
Also Published As

| Publication Number | Publication Date |
| --- | --- |
| CN108234858B (zh) | 2020-05-01 |
| US20190325564A1 (zh) | 2019-10-24 |
| US10970821B2 (zh) | 2021-04-06 |
| CN108234858A (zh) | 2018-06-29 |
Legal Events

| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18801830; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28.04.2020) |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18801830; Country of ref document: EP; Kind code of ref document: A1 |