WO2016058288A1 - A depth of field rendering method and apparatus (一种景深渲染方法和装置) - Google Patents

A depth of field rendering method and apparatus

Info

Publication number
WO2016058288A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
diameter
point
pixel point
target
Application number
PCT/CN2015/070919
Other languages
English (en)
French (fr)
Inventor
刘明
方晓鑫
贾霞
盛斌
罗圣美
樊增智
Original Assignee
中兴通讯股份有限公司
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2016058288A1 publication Critical patent/WO2016058288A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering

Definitions

  • the present invention relates to the field of image processing, and in particular, to a depth of field rendering method and apparatus.
  • Depth of field is an important feature of human visual system imaging.
  • the human eye changes the focal length of the lens by adjusting the degree of curvature (refraction) of the lens to obtain an image that is focused on a particular plane.
  • the image generated by this method is sharp only on the focal plane, while other regions appear blurred.
  • the rendering of depth of field effects is very important. It helps users better integrate into the scene and improve their perception of the depth of the scene. In addition, the rendering of the depth of field effect can focus people's attention on the specified object, highlighting the focus area.
  • the earliest study of depth of field algorithms was by Potmesil et al., on whose basis many other methods were developed. In 2008, Barsky divided these algorithms into object-space algorithms and image-space algorithms. Object-space algorithms produce realistic renderings but cannot achieve real-time rendering.
  • the image space-based algorithm is also called the post-processing method.
  • the algorithm uses the pinhole camera model to render a sharp image of the scene and then blurs the image using the depth value of each pixel and the focal length of the lens. Such methods can be based on a single image only, or on multiple images acquired at different depths of the scene. The single-image approach is adopted by most real-time depth of field rendering methods.
  • image-space methods include diffusion algorithms and aggregation algorithms: a diffusion algorithm generates the depth of field image by simulating the diffusion of each pixel's color information within its dispersion circle, while an aggregation algorithm samples the pixels around each pixel and aggregates their color information to simulate the color diffusion of those other pixels.
  • Color leakage refers to the phenomenon that the color information of the focal plane spreads on the final image and affects the unfocused plane, which produces a pattern that is inconsistent with the natural imaging law.
  • embodiments of the present invention are expected to provide a depth of field rendering method and apparatus.
  • An embodiment of the present invention provides a depth of field rendering method, where the method includes:
  • determining a maximum dispersion circle diameter of the target image; determining a sampling domain of each pixel in the target image according to the maximum dispersion circle diameter of the target image; and performing, for each pixel in the target image, the following processing:
  • determining, within the sampling domain of the pixel, the weight values of the foreground pixels and the background pixels of the pixel, and determining the color information of the pixel according to those weight values and the color information of the foreground and background pixels of the pixel.
  • the maximum dispersion circle diameter of the target image is determined by:
  • the dispersion circle diameter of each pixel in the target image is determined, and the maximum of these diameters is taken as the maximum dispersion circle diameter of the target image.
  • the diameter of the circle of the pixel p in the target image is determined by:
  • DCoC(p) is the dispersion circle diameter of the target pixel p
  • depth(p) is the distance between the pixel p and the lens
  • f_d is the distance between the focal plane and the lens
  • f is the focal length, that is, the distance between the focal point of the lens and the lens
  • D is the diameter of the lens.
  • the maximum dispersion circle diameter is in the range of [8, 32] pixels.
  • the determining a sampling domain of each pixel according to a maximum dispersion circle diameter of the target image includes:
  • the sampling domain of each pixel is set to a circular region centered on that pixel, with the maximum dispersion circle diameter of the target image as its diameter.
  • a foreground pixel of the pixel is a pixel in the sampling domain of the pixel that is closer to the viewpoint than the target pixel; a background pixel of the pixel is a pixel in the sampling domain of the pixel that is farther from the viewpoint than the target pixel.
  • the weight value B b (p, q) of the background pixel point q of the target pixel point p is determined by:
  • c b is a constant;
  • maxDCoC is the maximum dispersion circle diameter;
  • DCoC(p) is the diameter of the dispersion circle of the target pixel p;
  • χ(p, q) is a sampling function, and its value is:
  • d(p, q) is the distance between the target pixel p and its background pixel q
  • DCoC(q) is the dispersion circle diameter of the background pixel q
  • the weight value B f (p, m) of the foreground pixel point m of the target pixel point p is determined by:
  • d(p, m) is the distance between the pixels p and m; σ(m) is taken as one third of the dispersion circle diameter of the pixel m, i.e. σ(m) = DCoC(m)/3; c_f is a constant; χ(p, m) is a sampling function, and its value is as follows:
  • DCoC(m) is the dispersion circle diameter of the foreground pixel m of the target pixel p.
  • the color information of the pixel is determined by:
  • C_f(p) represents the color information of the pixel p
  • n represents any pixel within the sampling domain Ω(p) of the pixel p, where such a pixel may be a foreground pixel of p, a background pixel of p, or the pixel p itself
  • B(p, n) represents the weight value of pixel n with respect to pixel p
  • C_i(n) represents the color information of pixel n;
  • the color information of every foreground pixel and every background pixel of the target pixel in its sampling domain, and of the target pixel itself, is multiplied by the corresponding weight value; the products are accumulated, and the sum is divided by the sum of all the weight values; the result is taken as the color information of the target pixel;
  • the weight value B(p, p) of the target pixel p is determined in the same way as the weight value of a foreground pixel of the target pixel p.
  • An embodiment of the present invention provides a depth of field rendering apparatus, where the apparatus includes: a maximum dispersion circle diameter determining module, a sampling domain determining module, and a color information determining module;
  • the maximum dispersion circle diameter determining module configured to determine a maximum dispersion circle diameter of the target image
  • the sampling domain determining module is configured to determine a sampling domain of each pixel according to a maximum dispersion circle diameter of the target image
  • the color information determining module is configured to perform, for each pixel in the target image, a process of determining the weight values of the foreground and background pixels of the pixel within the sampling domain of the pixel, and determining the color information of the pixel according to those weight values and the color information of the foreground and background pixels of the pixel.
  • the maximum dispersion circle diameter determining module is configured to determine a maximum dispersion circle diameter of the target image by:
  • the dispersion circle diameter of each pixel in the target image is determined, and the maximum of these diameters is taken as the maximum dispersion circle diameter of the target image.
  • the maximum dispersion circle diameter determining module determines the diameter of the dispersion circle of the pixel point p in the target image by:
  • DCoC(p) is the dispersion circle diameter of the target pixel p
  • depth(p) is the distance between the pixel p and the lens
  • f_d is the distance between the focal plane and the lens
  • f is the focal length, that is, the distance between the focal point of the lens and the lens
  • D is the diameter of the lens.
  • the maximum dispersion circle diameter determining module is configured to select the maximum dispersion circle diameter within a range of [8, 32] pixels.
  • the sampling domain determining module is configured to determine a sampling domain of each pixel according to the following manner:
  • the sampling domain of each pixel is set to a circular region centered on that pixel, with the maximum dispersion circle diameter of the target image as its diameter.
  • a foreground pixel of the pixel is a pixel in the sampling domain of the pixel that is closer to the viewpoint than the target pixel; a background pixel of the pixel is a pixel in the sampling domain of the pixel that is farther from the viewpoint than the target pixel.
  • the color information determining module is configured to determine the weight value B b (p, q) of the background pixel point q of the target pixel point p by:
  • c b is a constant;
  • maxDCoC is the maximum dispersion circle diameter;
  • DCoC(p) is the diameter of the dispersion circle of the target pixel p;
  • χ(p, q) is a sampling function, and its value is:
  • d(p, q) is the distance between the target pixel p and its background pixel q
  • DCoC(q) is the dispersion circle diameter of the background pixel q
  • the color information determining module is further configured to determine a weight value B f (p, m) of the foreground pixel point m of the target pixel point p by:
  • d(p, m) is the distance between the pixels p and m; σ(m) is taken as one third of the dispersion circle diameter of the pixel m, i.e. σ(m) = DCoC(m)/3; c_f is a constant; χ(p, m) is a sampling function, and its value is as follows:
  • DCoC(m) is the dispersion circle diameter of the foreground pixel m of the target pixel p.
  • the color information determining module is configured to determine the color information C f (p) of the pixel by:
  • C_f(p) represents the color information of the pixel p
  • n represents any pixel within the sampling domain Ω(p) of the pixel p, where such a pixel may be a foreground pixel of p, a background pixel of p, or the pixel p itself
  • B(p, n) represents the weight value of pixel n with respect to pixel p
  • C_i(n) represents the color information of pixel n;
  • the color information of every foreground pixel and every background pixel of the target pixel in its sampling domain, and of the target pixel itself, is multiplied by the corresponding weight value; the products are accumulated, and the sum is divided by the sum of all the weight values; the result is taken as the color information of the target pixel;
  • the weight value B(p, p) of the target pixel p is determined in the same manner as the weight value of a foreground pixel of the target pixel p.
  • a method and apparatus for rendering depth of field: determining a maximum dispersion circle diameter of a target image; determining a sampling domain of each pixel in the target image according to the maximum dispersion circle diameter; and performing, for each pixel in the target image, a process of determining the color information of the pixel according to the color information of the foreground and background pixels within the sampling domain of the pixel.
  • On the one hand, the sampling domain of the target pixel is determined from the maximum dispersion circle of the target image, and the color information of the target pixel is determined from the color information of the other pixels in that domain, so every other pixel that can affect the color information of the target pixel is included in the sampling domain; on the other hand, the other pixels in the sampling domain of the target pixel are divided into foreground pixels and background pixels of the target pixel, and their weight values are determined separately.
  • FIG. 1 is a basic flowchart of a depth of field rendering method according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a lens imaging principle according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a depth of field rendering apparatus according to an embodiment of the present invention.
  • determining a maximum dispersion circle diameter of the target image; determining a sampling domain of each pixel in the target image according to the maximum dispersion circle diameter; and performing the following processing on each pixel in the target image: determining the weight values of the foreground pixels and background pixels of the pixel within the sampling domain of the pixel, and determining the color information of the pixel according to those weight values and the color information of the foreground and background pixels of the pixel.
  • the embodiment of the invention provides a depth of field rendering method. As shown in FIG. 1 , the method includes the following steps:
  • Step 101 Determine a maximum dispersion circle diameter of the target image; and determine a sampling domain of each pixel point in the target image according to the maximum dispersion circle diameter;
  • the maximum dispersion circle diameter of the target image may be determined according to either of the following two schemes:
  • in the first scheme, the dispersion circle diameter of each pixel in the target image is determined first, and the maximum of these diameters is set as the maximum dispersion circle diameter of the target image;
  • in the second scheme, the maximum dispersion circle diameter is set within the range of [8, 32] pixels, that is, a suitable value in [8, 32] is selected as the maximum dispersion circle diameter of the target image as needed; preferably, the maximum dispersion circle diameter is set to 16.
  • for the first scheme, the dispersion circle diameter of each pixel in the target image is determined from the lens parameters of the target image; specifically, these parameters include: the object distance depth(p), the distance f_d between the focal plane and the lens, the focal length f, and the lens diameter D;
  • the dispersion circle diameter is determined in the same way for every pixel; the method is described below taking the point p as the target pixel;
  • the target pixel p is a sample point in the scene to be rendered; the light reflected from point p is refracted by the lens and projected onto the imaging plane;
  • the object distance depth (p) is the distance between the pixel point p and the lens, and is embodied as a depth value of the point p during the rendering process;
  • the focal plane is the plane whose portion of the scene appears sharp in the final image: light reflected by a point on the focal plane is refracted by the lens and converges to a single point on the imaging plane, so its color information matches the original scene, that is, it appears sharp;
  • the imaging plane is the plane that receives all the light refracted by the lens and produces the final image.
  • f_d is the distance between the focal plane and the lens, reflected as the depth value of the focal plane during rendering;
  • the focal length f of the lens is the distance between the focal point of the lens and the lens; it is one of the important parameters of the lens and affects the degree of blurring of unfocused regions;
  • the image distance I is the distance between the imaging plane and the lens;
  • the lens diameter D is the diameter of the lens; it too is one of the important parameters of the lens and affects the degree of blurring of unfocused regions.
  • when the point p is located on a non-focal plane, its reflected light finally spreads over a circular area on the imaging plane;
  • the dispersion circle (circle of confusion) refers to this circular area, and DCoC denotes the diameter of this area, that is, the dispersion circle diameter of point p;
  • the dispersion circle diameter DCoC(p) of point p can then be calculated from the lens parameters above;
  • the maximum of all the dispersion circle diameters is taken as the maximum dispersion circle diameter of the target image.
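The first scheme can be sketched in code. Because the patent's equation image is not reproduced in this text, the formula below is the standard thin-lens circle-of-confusion relation, assumed to match it and expressed in terms of the variables just defined (depth(p), f_d, f, D):

```python
def dcoc(depth_p: float, f_d: float, f: float, D: float) -> float:
    """Dispersion circle (circle of confusion) diameter of a pixel p.

    Assumed standard thin-lens relation (the patent's own equation is
    not reproduced in the extracted text):

        DCoC(p) = D * f * |depth(p) - f_d| / (depth(p) * (f_d - f))

    A pixel exactly on the focal plane (depth_p == f_d) yields 0; the
    diameter grows as the pixel moves away from the focal plane.
    """
    return D * f * abs(depth_p - f_d) / (depth_p * (f_d - f))


def max_dcoc(depths, f_d, f, D):
    """First scheme: the maximum dispersion circle diameter of the
    target image is the maximum over the depths of all its pixels."""
    return max(dcoc(z, f_d, f, D) for z in depths)
```

For example, with f = 0.05, f_d = 2.0, and D = 0.025 (all in metres, illustrative values), a pixel at depth 2.0 yields diameter 0, while pixels off the focal plane yield positive diameters.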
  • since the final color information of each pixel is jointly determined by the other pixels in its sampling domain, in this step, after the maximum dispersion circle diameter of the target image has been determined according to the first or second scheme above, the sampling domain of each pixel is determined according to that diameter.
  • the sampling domain of a target pixel is a circular region centered on the target pixel; all the pixels used in computing the color information of a pixel constitute its sampling domain. Setting the diameter of each pixel's sampling domain to the maximum dispersion circle diameter of the target image, as determined above, ensures that every other pixel that could influence the color information of the target pixel lies within the sampling domain of the target pixel;
  • thus, determining the sampling domain of each pixel according to the maximum dispersion circle diameter of the target image includes:
  • setting the sampling domain of each pixel to a circular region centered on that pixel, with the maximum dispersion circle diameter of the target image as its diameter.
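As a concrete sketch of this step (function and parameter names are illustrative, not from the patent), membership in the circular sampling domain can be computed as:

```python
import math


def in_sampling_domain(center, pixel, max_dcoc):
    """True if `pixel` lies in the sampling domain of `center`: the
    circular region centred on `center` whose diameter is the maximum
    dispersion circle diameter of the image (radius max_dcoc / 2)."""
    (cx, cy), (x, y) = center, pixel
    return math.hypot(x - cx, y - cy) <= max_dcoc / 2.0


def sampling_domain(center, pixels, max_dcoc):
    """All pixels of the image that fall inside the sampling domain."""
    return [p for p in pixels if in_sampling_domain(center, p, max_dcoc)]
```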
  • Step 102: Perform, on each pixel in the target image, a process of determining the weight values of the foreground pixels and background pixels of the pixel within the sampling domain of the pixel, and determine the color information of the pixel according to those weight values and the color information of its foreground and background pixels;
  • after the sampling domain of each pixel has been determined in step 101, the same processing is performed for each pixel in the target image;
  • the pixels in the sampling domain may be divided into background pixels and foreground pixels of the target pixel; a background pixel of the target pixel is a pixel farther from the viewpoint than the target pixel, and a foreground pixel is a pixel closer to the viewpoint than the target pixel;
  • the weight value of a foreground pixel of the target pixel is calculated very differently from the weight value of a background pixel; this difference is mainly determined by the fact that, during imaging in the visual system, objects on the focal plane fully occlude the background behind them, while foreground objects only partially occlude objects on the focal plane.
  • taking a background pixel q and the target pixel p as an example, the calculation of the weight value of the background pixel q relative to the target pixel p is described below. Specifically, the weight value B_b(p, q) of the background pixel q relative to the target pixel p is determined by:
  • c_b is a constant determined by the size of the sampling domain, which can be adjusted according to the desired degree of image blurring;
  • maxDCoC is the maximum diameter of the dispersion circle;
  • DCoC(p) is the diameter of the dispersion circle of the target pixel p;
  • χ(p, q) is a sampling function; it determines the degree of influence of the background pixel q on the color information of the target pixel p, and its value is:
  • d(p, q) is the distance between the target pixel p and its background pixel q. When the background pixel lies on the focal plane, its dispersion circle diameter DCoC(q) is 0,
  • so the sampling function takes the value 0, thereby preventing the color of pixels on the focal plane from leaking onto pixels in front of them.
  • the weight value of a background pixel is affected by the dispersion circle diameter of the target pixel and is proportional to it; therefore, when the target pixel lies on the focal plane, its dispersion circle diameter is zero and the weights of all its background pixels are zero, so the focal plane is unaffected by background pixels and the original sharp scene is preserved. Meanwhile, the weight of a background pixel is affected by its distance from the target pixel, with the magnitude of this influence determined by the sampling function χ(p, q):
  • χ(p, q) is 1 when the distance between the target pixel and the background pixel is smaller than the dispersion circle diameter of the background pixel; otherwise, χ(p, q) is 0. This ensures that when a background pixel lies on the focal plane, its own dispersion circle diameter is 0 and its weight is 0 for any target pixel, which effectively prevents the color of the focal plane from leaking onto unfocused planes.
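The background-pixel weight just described can be sketched as follows. The patent's equation image is not reproduced in this text, so the exact form is an assumption built only from the stated properties: proportionality to DCoC(p), normalisation by maxDCoC, a tunable constant c_b, and the gate χ(p, q):

```python
def background_weight(d_pq, dcoc_p, dcoc_q, max_dcoc, c_b=1.0):
    """Weight B_b(p, q) of a background pixel q relative to target p.

    Assumed form, consistent with the properties stated in the text:
      - proportional to DCoC(p): a target pixel on the focal plane
        (DCoC(p) = 0) takes no colour from any background pixel;
      - chi(p, q) = 1 only when d(p, q) < DCoC(q), so a background
        pixel on the focal plane (DCoC(q) = 0) never leaks colour.
    """
    chi = 1.0 if d_pq < dcoc_q else 0.0
    return c_b * (dcoc_p / max_dcoc) * chi
```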
  • the weight value B_f(p, m) of the foreground pixel m of the target pixel p with respect to the target pixel p is determined by equation (4):
  • equation (4) uses a Gaussian function as the main basis for calculating the weight value B_f(p, m) of the foreground pixel m, so that the weight of m relative to other pixels within its dispersion circle decreases from the center of the circle to its edge; the rate of decrease is governed by the dispersion circle diameter of the foreground pixel, and the larger the diameter, the slower the decay.
  • d(p, m) is the distance between the pixels p and m; σ(m) is taken as one third of the dispersion circle diameter of the pixel m, i.e. σ(m) = DCoC(m)/3. When the foreground pixel lies on the focal plane,
  • its dispersion circle diameter is 0, so σ(m) is also 0 and the decay rate of the weight tends to infinity, which is equivalent to the weight for all other pixels decaying to 0. Therefore, foreground pixels on the focal plane cannot affect other pixels, ensuring that the focal plane keeps its original sharpness.
  • Conversely, for a blurred foreground pixel the weight falls off slowly, so as long as the target pixel lies within its dispersion circle, the target pixel is affected by the foreground pixel even if the target pixel is on the focal plane, thus ensuring that the color information of foreground pixels can diffuse onto the focal plane;
  • c_f is a constant determined by the size of the sampling domain, which can be adjusted according to the desired degree of image blurring;
  • χ(p, m) is a sampling function, whose value is as shown in equation (5):
  • the value of the sampling function χ(p, m) also affects the weight value of the foreground pixel: when the target pixel lies within the dispersion circle of the foreground pixel,
  • the value of χ(p, m) is 1; otherwise, the value of χ(p, m) is 0. This ensures that foreground pixels only affect other pixels that lie within their dispersion circle.
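A sketch of the foreground-pixel weight, combining the Gaussian fall-off with σ(m) = DCoC(m)/3 and the gate χ(p, m). Equations (4) and (5) themselves are not reproduced in this text, so the exact form (in particular reading "within the dispersion circle" as a radius of DCoC(m)/2) is an assumption:

```python
import math


def foreground_weight(d_pm, dcoc_m, c_f=1.0):
    """Weight B_f(p, m) of a foreground pixel m relative to target p.

    Assumed form: a Gaussian decaying from the centre of m's dispersion
    circle, with sigma(m) = DCoC(m) / 3 as stated in the text, gated by
    chi(p, m), which is 1 only when p lies inside m's dispersion circle
    (taken here as d(p, m) < DCoC(m) / 2, an interpretation).
    """
    if dcoc_m == 0.0:            # m on the focal plane: no diffusion
        return 0.0
    if d_pm >= dcoc_m / 2.0:     # chi(p, m) = 0 outside the circle
        return 0.0
    sigma = dcoc_m / 3.0
    return c_f * math.exp(-d_pm ** 2 / (2.0 * sigma ** 2))
```

Note that a larger DCoC(m) gives a larger σ(m) and thus a slower decay, matching the behaviour described above.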
  • determining the color information of the pixel according to the color information of the foreground and background pixels within the sampling domain of the pixel specifically includes the following;
  • the method for determining the color information of the target pixel p is described in detail below, taking the target pixel p as an example.
  • the color information C f (p) of the target pixel point p is determined by:
  • n is any pixel in the sampling domain Ω(p) of point p (including the foreground pixels of point p, the background pixels of point p, and point p itself); B(p, n) represents the weight value of pixel n relative to point p; C_i(n) represents the color information of pixel n. The weight value of the target pixel p itself is determined in the same way as the weight value of a foreground pixel of p, that is, the weight value
  • B(p, p) of point p is determined by:
  • d(p, p) is the distance between the pixel p and itself, and takes the value 0;
  • σ(p) takes the value of one third of the dispersion circle diameter of the pixel p, i.e. σ(p) = DCoC(p)/3; c_f is a constant;
  • χ(p, p) is a sampling function whose value is as follows:
  • DCoC(p) is the dispersion circle diameter of the target pixel p.
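The normalised weighted sum described above can be sketched directly (names are illustrative, not from the patent):

```python
def blend_color(weights, colors):
    """Final colour C_f(p) of the target pixel p:

        C_f(p) = sum_n B(p, n) * C_i(n) / sum_n B(p, n)

    where n ranges over the sampling domain of p (its foreground
    pixels, its background pixels, and p itself). `weights` maps each
    pixel n to B(p, n); `colors` maps n to an (r, g, b) tuple C_i(n).
    """
    total = sum(weights.values())
    if total == 0.0:
        raise ValueError("all weights are zero")
    return tuple(
        sum(weights[n] * colors[n][c] for n in weights) / total
        for c in range(3)
    )
```

For instance, two pixels with equal weights and colours (1, 0, 0) and (0, 1, 0) blend to (0.5, 0.5, 0).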
  • the solution provided by the embodiment of the present invention distinguishes the pixels in the sampling domain of the target pixel into foreground pixels and background pixels, and can reproduce the feature of natural depth of field image generation that the background region is occluded while the foreground region diffuses.
  • On the one hand, the diffusion of background pixels onto the focal plane is cut off, because the dispersion circle diameter of pixels on the focal plane is zero, and pixels on the focal plane are likewise prevented from aggregating color from other pixels; on the other hand, the overly sharp artifacts at the boundary between a blurred foreground and the background are also resolved, mainly by using a Gaussian function as the weight calculation function, which produces smoothly attenuated edges.
  • the embodiment of the present invention provides a depth of field rendering device.
  • the device includes: a maximum dispersion circle diameter determining module 31, a sampling domain determining module 32, and a color information determining module 33;
  • the maximum dispersion circle diameter determining module 31 is configured to determine a maximum dispersion circle diameter of the target image
  • the sampling domain determining module 32 is configured to determine a sampling domain of each pixel according to the maximum dispersion circle diameter of the target image
  • the color information determining module 33 is configured to perform, for each pixel point in the target image, a process of determining a weight value of a foreground pixel point and a background pixel point of the pixel point in a sampling domain of the pixel point, according to the The weight value and color information of the foreground pixel and the background pixel of the pixel determine the color information of the pixel.
  • the maximum dispersion circle diameter determining module 31 is configured to determine a maximum dispersion circle diameter of the target image by:
  • the dispersion circle diameter of each pixel in the target image is determined, and the maximum of these diameters is taken as the maximum dispersion circle diameter of the target image.
  • the maximum dispersion circle diameter determining module 31 further determines the diameter of the dispersion circle of the pixel point p in the target image by:
  • DCoC(p) is the dispersion circle diameter of the pixel p; depth(p) is the distance between the pixel p and the lens; f_d is the distance between the focal plane and the lens; f is the focal length, that is, the distance between the focal point of the lens and the lens; D is the diameter of the lens.
  • the maximum dispersion circle diameter determining module 31 is further configured to select the maximum dispersion circle diameter within a range of [8, 32] pixels; the maximum dispersion circle diameter can be set to any value within that range according to actual needs; preferably, the maximum dispersion circle diameter can be set to 16.
  • sampling domain determining module 32 is configured to determine a sampling domain of each pixel according to the following manner:
  • the sampling domain of each pixel is set to a circular region centered on that pixel, with the maximum dispersion circle diameter of the target image as its diameter.
  • a foreground pixel of the pixel is a pixel in the sampling domain of the pixel that is closer to the viewpoint than the target pixel; a background pixel of the pixel is a pixel in the sampling domain of the pixel that is farther from the viewpoint than the target pixel.
  • the color information determining module 33 is specifically configured to determine the weight value B b (p, q) of the background pixel point q of the target pixel point p by:
  • c b is a constant;
  • maxDCoC is the maximum dispersion circle diameter;
  • DCoC(p) is the diameter of the dispersion circle of the target pixel p;
  • χ(p, q) is a sampling function, and its value is:
  • d(p, q) is the distance between the target pixel p and its background pixel q
  • DCoC(q) is the dispersion circle diameter of the background pixel q
  • the color information determining module 33 is further configured to determine the weight value B f (p, m) of the foreground pixel point m of the target pixel point p by:
  • d(p, m) is the distance between the pixels p and m; σ(m) is taken as one third of the dispersion circle diameter of the pixel m, i.e. σ(m) = DCoC(m)/3; c_f is a constant; χ(p, m) is a sampling function, and its value is as follows:
  • DCoC(m) is the dispersion circle diameter of the foreground pixel m of the target pixel p.
  • the color information determining module 33 is configured to determine color information of the pixel point by:
  • C_f(p) represents the color information of the pixel p
  • n represents any pixel within the sampling domain Ω(p) of the pixel p (including the foreground pixels of point p, the background pixels of point p, and point p itself)
  • B(p, n) represents the weight value of point n with respect to point p
  • C_i(n) represents the color information of point n; the weight value of the target pixel p itself is determined in the same manner as
  • the weight value of a foreground pixel of the target pixel p, that is, the weight value B(p, p) of the target pixel p is determined by:
  • d(p,p) is the distance between the pixel points p and p, and takes a value of 0;
  • ⁇ (p) takes a value of one third of the diameter of the circle of the pixel p, ie, c f is a constant;
  • ⁇ (p,p) is a sampling function whose value is as follows:
  • DCoC(p) is the diameter of the dispersion circle of the target pixel point p.
  • the color information of all the foreground pixel points, all the background pixel points, and the target pixel points of the target pixel point in the target pixel point sampling domain is multiplied by the respective weight values, and the calculation result is accumulated, and then the color of the target pixel point is added. After the information is added, the final calculation result is divided by the sum of the ownership weights, and the obtained result is taken as the color information of the target pixel.
  • the maximum circle-of-confusion diameter determining module 31, the sampling domain determining module 32, and the color information determining module 33 may be implemented by a Central Processing Unit (CPU), a Micro Processing Unit (MPU), a Digital Signal Processor (DSP), or a Field-Programmable Gate Array (FPGA) in the image processing apparatus.
  • embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
  • these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions may also be loaded onto a computer or other programmable data processing apparatus, such that a series of operational steps are performed on the computer or other programmable apparatus to produce computer-implemented processing, so that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed is a depth-of-field rendering method: determining the maximum circle-of-confusion diameter of a target image; determining, according to the maximum circle-of-confusion diameter, the sampling domain of each pixel in the target image; and performing the following processing on each pixel in the target image: determining the weight values of the foreground pixels and the background pixels of the pixel within the sampling domain of the pixel, and determining the color information of the pixel according to the weight values and the color information of the foreground pixels and the background pixels of the pixel. Also disclosed is a depth-of-field rendering apparatus.

Description

一种景深渲染方法和装置 技术领域
本发明涉及图像处理领域,具体涉及一种景深渲染方法和装置。
背景技术
随着计算机图形渲染技术的不断进步,不论在个人计算机平台还是智能移动终端,人们对于应用软件,尤其是游戏领域的应用上的场景渲染的逼真性要求越来越高。景深由于其本身渲染的复杂性较大,效率较低,作为一个目前尚未在图形应用中被普遍实现的效果,是未来一大技术研究热点。
景深是人体视觉系统成像所具有的重要特征。人的眼睛通过调节晶状体的弯曲程度(屈光)来改变晶状体焦距,获取聚焦于特定平面的图像。通过此法生成的图像只聚焦平面区域上有清晰的物象,而其他区域则显得模糊。在动画游戏、虚拟现实以及其他应用当中,景深效果的渲染显得十分重要。它能帮助使用者更好地融入到场景当中并且提高他们对于场景深度的感知。此外,景深效果的渲染能够将人们的注意力集中到指定的物体上,突出聚焦区域。
最早研究景深算法的是Potmesil等人，在此基础上诞生了许多其他方法。2008年Barsky将这些算法划分为物体空间算法和图像空间算法。基于物体空间的算法虽然渲染效果逼真，但无法达到实时渲染。基于图像空间的算法也称后期处理方法，其算法采用针孔相机模型渲染出场景的清晰图像并通过该图像上各个像素点的深度值以及透镜焦距等信息对图像进行模糊处理。这类方法可以仅基于单张图像，也可以在场景不同深度处采集多张图像来处理。而基于单张图像的方法为大部分实时景深渲染方法所采用。在对图像的处理过程中分为扩散和聚合两大类，扩散算法通过模拟每个像素点的颜色信息在其弥散圈内的扩散来完成景深图像的生成过程，而聚合算法则是对每个像素点的周围的像素点进行采样，通过聚合其他像素的颜色信息来完成对其他像素颜色扩散过程的模拟。
现有的基于图像空间的方法普遍存在的问题就是人工痕迹的出现,其中最为典型的就是颜色泄露。颜色泄露是指在最终图像上聚焦平面的颜色信息扩散并影响到了非聚焦平面的区域,所产生的与自然界成像规律不符的现象。
发明内容
为了解决现有存在的技术问题,本发明实施例期望提供一种景深渲染方法和装置。
本发明实施例提供了一种景深渲染方法,所述方法包括:
确定目标图像的最大弥散圈直径;根据目标图像的最大弥散圈直径确定目标图像中各个像素点的采样域;对目标图像中的每一个像素点执行以下处理:确定所述像素点的采样域内所述像素点的前景像素点和背景像素点的权重值,根据所述像素点的前景像素点和背景像素点的权重值和颜色信息确定所述像素点的颜色信息。
上述方案中,通过以下方式确定目标图像的最大弥散圈直径:
确定目标图像中各个像素点的弥散圈直径,将所确定的各个像素点的弥散圈直径的最大值确定为目标图像的最大弥散圈直径。
上述方案中,通过以下方式确定目标图像中像素点p的弥散圈直径:
DCoC(p) = D·f·|depth(p)−fd| / ( depth(p)·(fd−f) )
其中，DCoC(p)为目标像素点p的弥散圈直径值，depth(p)为像素点p和透镜之间的距离；fd为聚焦平面和透镜之间的距离；f为透镜焦点和透镜之间的距离；D为透镜的直径大小。
上述方案中,所述最大弥散圈直径在[8,32]像素范围内取值。
上述方案中,所述根据目标图像的最大弥散圈直径确定各个像素的采样域,包括:
将各个像素点的采样域设置为以各个像素点为圆心,并以目标图像的最大弥散圈直径作为直径的圆形域。
上述方案中,所述像素点的前景像素点为所述像素点的采样域内相对于目标像素点靠近视点的像素点;所述像素点的背景像素为所述像素点的采样域内相对于目标像素点远离视点的像素点。
上述方案中,通过以下方式确定目标像素点p的背景像素点q的权重值Bb(p,q):
Bb(p,q) = cb·( DCoC(p) / maxDCoC )·δ(p,q)
其中,cb为常量;maxDCoC为最大弥散圈直径;DCoC(p)为目标像素点p的弥散圈直径值;δ(p,q)为采样函数,其取值为:
δ(p,q) = 1（当 d(p,q) < DCoC(q) 时）；否则 δ(p,q) = 0
其中,d(p,q)为目标像素点p和目标像素点p的背景像素点q之间的距离,DCoC(q)为所述背景像素点q的弥散圈直径;
通过以下方式确定目标像素点p的前景像素点m的权重值Bf(p,m):
Bf(p,m) = cf·exp( −d(p,m)² / (2·σ(m)²) )·δ(p,m)
其中,d(p,m)为像素点p和m之间的距离;σ(m)的取值为像素点m的弥散圈直径的三分之一,即,
σ(m) = DCoC(m) / 3
cf为常量;δ(p,m)为采样函数,其取值如下式所示:
δ(p,m) = 1（当 d(p,m) < DCoC(m) 时）；否则 δ(p,m) = 0
其中,DCoC(m)为目标像素点p的前景像素点m的弥散圈直径。
上述方案中,通过以下方式确定所述像素点的颜色信息:
Cf(p) = Σ_{n∈Ω(p)} B(p,n)·Ci(n) / Σ_{n∈Ω(p)} B(p,n)
其中,Cf(p)代表像素点p的颜色信息;n代表像素点p的采样域Ω(p)内任意一个像素点,所述任意一个像素点包括点p的前景像素点和点p的背景像素点,以及像素点p本身;B(p,n)表示点n相对于点p的权重值;Ci(n)代表点n的颜色信息;
即,将目标像素点采样域内所述目标像素点的所有前景像素点、所有背景像素点及目标像素点的颜色信息乘以各自的权重值,将计算结果累加之后,再与目标像素点的颜色信息相加,之后,将最终计算结果除以所有权重值总和,将得到的结果作为目标像素点的颜色信息;
其中,目标像素点p的权重值B(p,p)的确定方法与目标像素点p的前景像素点的权重值确定方法相同。
本发明实施例提供了一种景深渲染装置,所述装置包括:最大弥散圈直径确定模块、采样域确定模块及颜色信息确定模块;其中,
所述最大弥散圈直径确定模块,配置为确定目标图像的最大弥散圈直径;
所述采样域确定模块,配置为根据目标图像的最大弥散圈直径确定各个像素点的采样域;
所述颜色信息确定模块，配置为对目标图像中的每一个像素点执行以下处理：确定所述像素点的采样域内所述像素点的前景像素点和背景像素点的权重值，根据所述像素点的前景像素点和背景像素点的权重值和颜色信息确定所述像素点的颜色信息。
上述方案中,所述最大弥散圈直径确定模块,配置为通过以下方式确定目标图像的最大弥散圈直径:
确定目标图像中各个像素点的弥散圈直径,将所确定的各个像素点的弥散圈直径的最大值确定为目标图像的最大弥散圈直径。
上述方案中,所述最大弥散圈直径确定模块通过以下方式确定目标图像中像素点p的弥散圈直径:
DCoC(p) = D·f·|depth(p)−fd| / ( depth(p)·(fd−f) )
其中,DCoC(p)为目标像素点p的弥散圈直径值,depth(p)为像素点p和透镜之间的距离;fd为聚焦平面和透镜之间的距离;f为透镜焦点和透镜之间的距离;D为透镜的直径大小。
上述方案中,所述最大弥散圈直径确定模块,配置为在[8,32]像素范围内选定所述最大弥散圈直径。
上述方案中,所述采样域确定模块,配置为根据以下方式确定各个像素点的采样域:
将各个像素点的采样域设置为以各个像素点为圆心,并以目标图像的最大弥散圈直径作为直径的圆形域。
上述方案中,所述像素点的前景像素点为所述像素点的采样域内相对于目标像素点靠近视点的像素点;所述像素点的背景像素为所述像素点的采样域内相对于目标像素点远离视点的像素点。
上述方案中,所述颜色信息确定模块,配置为通过以下方式确定目标像素点p的背景像素点q的权重值Bb(p,q):
Bb(p,q) = cb·( DCoC(p) / maxDCoC )·δ(p,q)
其中，cb为常量；maxDCoC为最大弥散圈直径；DCoC(p)为目标像素点p的弥散圈直径值；δ(p,q)为采样函数，其取值为：
δ(p,q) = 1（当 d(p,q) < DCoC(q) 时）；否则 δ(p,q) = 0
其中,d(p,q)为目标像素点p和目标像素点p的背景像素点q之间的距离,DCoC(q)为所述背景像素点q的弥散圈直径;
所述颜色信息确定模块,还配置为通过以下方式确定目标像素点p的前景像素点m的权重值Bf(p,m):
Bf(p,m) = cf·exp( −d(p,m)² / (2·σ(m)²) )·δ(p,m)
其中,d(p,m)为像素点p和m之间的距离;σ(m)的取值为像素点m的弥散圈直径的三分之一,即,
σ(m) = DCoC(m) / 3
cf为常量；δ(p,m)为采样函数，其取值如下式所示：
δ(p,m) = 1（当 d(p,m) < DCoC(m) 时）；否则 δ(p,m) = 0
其中,DCoC(m)为目标像素点p的前景像素点m的弥散圈直径。
上述方案中,所述颜色信息确定模块,配置为通过以下方式确定所述像素点的颜色信息Cf(p):
Cf(p) = Σ_{n∈Ω(p)} B(p,n)·Ci(n) / Σ_{n∈Ω(p)} B(p,n)
其中,Cf(p)代表像素点p的颜色信息;n代表像素点p的采样域Ω(p)内任意一个像素点,所述任意一个像素点包括点p的前景像素点和点p的背景像素点,以及像素点p本身;B(p,n)表示点n相对于点p的权重值;Ci(n)代表点n的颜色信息;
即，将目标像素点采样域内所述目标像素点的所有前景像素点、所有背景像素点及目标像素点的颜色信息乘以各自的权重值，将计算结果累加之后，再与目标像素点的颜色信息相加，之后，将最终计算结果除以所有权重值总和，将得到的结果作为目标像素点的颜色信息；
其中,像素点p的权重值B(p,p)的确定方式与目标像素点p的前景像素点的权重值确定方式相同。
本发明实施例所提供的一种景深渲染方法和装置，确定目标图像的最大弥散圈直径；根据所述最大弥散圈直径确定目标图像中各个像素点的采样域；对目标图像中的每一个像素点执行以下处理：根据所述像素点的采样域内所述像素点的前景像素点和背景像素点的颜色信息确定所述像素点的颜色信息。如此，一方面，根据目标图像的最大弥散圈直径确定目标像素点的采样域，并根据该采样域内的其它像素点的颜色信息确定目标像素点的颜色信息，能够将所有影响该目标像素点颜色信息的其它像素点包含在采样域内；另一方面，将目标像素点采样域内的其它像素点划分为该目标像素点的前景像素点和背景像素点，确定所述像素点的采样域内所述像素点的前景像素点和背景像素点的权重值，再根据所述前景像素点和背景像素点的权重值和颜色信息确定目标像素点的颜色信息，能够有效抑制景深渲染过程中由于颜色泄露而产生的人工痕迹问题。
附图说明
图1为本发明实施例提供的景深渲染方法基本流程图;
图2为本发明实施例提供的透镜成像原理示意图;
图3为本发明实施例提供的景深渲染装置的基本结构图。
具体实施方式
本发明实施例中，确定目标图像的最大弥散圈直径；根据所述最大弥散圈直径确定目标图像中各个像素点的采样域；对目标图像中的每一个像素点执行以下处理：确定所述像素点的采样域内所述像素点的前景像素点和背景像素点的权重值，根据所述像素点的前景像素点和背景像素点的权重值和颜色信息确定所述像素点的颜色信息。
下面通过附图及具体实施例对本发明做进一步的详细说明。
本发明实施例提供了一种景深渲染方法,如图1所示,该方法包括以下步骤:
步骤101:确定目标图像的最大弥散圈直径;根据所述最大弥散圈直径确定目标图像中各个像素点的采样域;
具体的,可以根据以下两种方案确定目标图像的最大弥散圈直径;
第一种方案:首先确定目标图像中各个像素点的弥散圈直径,将所确定的各个像素点的弥散圈直径的最大值设置为目标图像的最大弥散圈直径;
第二种方案:将最大弥散圈直径设置在[8,32]像素范围内,即,根据需要在[8,32]内选取合适的值作为目标图像的最大弥散圈直径;优选的,可以将最大弥散圈直径设置为16。
上述第一种方案中,需要根据目标图像的透镜参数确定目标图像中各个像素点的弥散圈直径;具体的,所述目标图像的透镜参数包括:物距depth(p)、聚焦平面和透镜之间的距离fd、焦距f及透镜直径D;
各个像素点的弥散圈直径的确定方法相同,下面以点p作为目标像素点,对像素点的弥散圈直径确定方法进行介绍;
如图2所示,目标像素点p为所要渲染的场景中的样例像素点,场景中的每个像素点反射的光线均会如p点的反射的光线所示,经过透镜的折射后投射在成像平面;
其中,物距depth(p)为像素点p和透镜之间的距离,在渲染过程中体现为点p的深度值;
聚焦平面为最终成像中清晰的场景部分所在的平面，聚焦平面上的一个点所反射的光线经过透镜的折射后会在成像平面上聚焦在同一点上，从而保证和原始场景一致的颜色信息，即呈现出清晰的结果；成像平面是接收经过透镜折射后的所有光线，生成最终图像的平面。
fd为聚焦平面和透镜之间的距离,在渲染过程中体现为聚焦平面的深度值;
透镜的焦距f为透镜焦点和透镜之间的距离,是透镜的重要参数之一,影响到非聚焦区域的模糊程度;
像距I为成像平面和透镜之间的距离;
透镜直径D为透镜的直径大小,是透镜的重要参数之一,影响到非聚焦区域的模糊程度。
图2中,点p位于非聚焦平面,其反射的光线最终在聚焦平面上扩散出一个圆形区域,弥散圈即指该圆形区域,DCoC为该圆形区域的直径值,即为点p的弥散圈直径;
具体的,点p的弥散圈直径DCoC(p)可以通过下式计算:
DCoC(p) = D·f·|depth(p)−fd| / ( depth(p)·(fd−f) )
通过上述方法确定各个像素点的弥散圈直径之后,取所有弥散圈直径的最大值作为目标图像的最大弥散圈直径。
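The computation above can be read off directly in code. A minimal sketch, assuming the standard thin-lens circle-of-confusion relation DCoC(p) = D·f·|depth(p) − fd| / (depth(p)·(fd − f)) for the lens parameters defined above; the NumPy vectorisation and all variable names are illustrative, not from the patent:

```python
import numpy as np

def coc_diameter(depth, f, fd, D):
    """Circle-of-confusion diameter for each pixel.

    depth: per-pixel distance from the pixel's scene point to the lens
    f:     focal length, fd: distance of the focus plane, D: lens diameter
    (all in the same length units; the result can then be scaled to pixels)
    """
    return D * f * np.abs(depth - fd) / (depth * (fd - f))

# Example: a pixel exactly on the focus plane (depth == fd) gets a
# zero-diameter CoC; pixels off the plane get a positive one.
depth = np.array([2.0, 4.0, 8.0])   # with fd = 4.0, the middle pixel is in focus
dcoc = coc_diameter(depth, f=0.05, fd=4.0, D=0.02)
max_dcoc = dcoc.max()               # maximum CoC diameter of the image
```

Taking the maximum of the per-pixel diameters then gives the maximum circle-of-confusion diameter used to size every pixel's sampling domain.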
由于每一个像素点的最终颜色信息由其采样域内的其他像素点共同确定,因此,在这一步骤中,在根据上述第一或第二种方案确定目标图像的最大弥散圈直径之后,还需要根据所确定的最大弥散圈直径确定各个像素点的采样域。
每个像素点颜色信息计算过程中所用到的所有像素点构成一个采样域，在本发明实施例中，某一个目标像素的采样域被设置为以该目标像素为圆心的一个圆形域，并将该像素点采样域的直径值设置为上述确定的目标图像的最大弥散圈直径，这样，可以保证所有可能影响到目标像素点颜色信息的其他像素点都在目标像素点的采样域内；
因此,所述根据目标图像的最大弥散圈直径确定各个像素点的采样域,包括:
将各个像素点的采样域设置为以各个像素点为圆心,并以目标图像的最大弥散圈直径作为直径的圆形域。
步骤102:对目标图像中的每一个像素点执行以下处理:确定所述像素点的采样域内所述像素点的前景像素点和背景像素点的权重值,根据所述像素点的前景像素点和背景像素点的权重值和颜色信息确定所述像素点的颜色信息;
步骤101中确定了各个像素点的采样域之后,对于目标图像中的每一个像素点执行相同的处理;
对于采样域内的像素点,根据它们与目标像素点的相对位置可以被划分为目标像素点的背景像素点和目标像素点的前景像素点;其中,目标像素点的背景像素点为相对于目标像素点远离视点的像素点,前景像素点为相对于目标像素点靠近视点的像素点;
在根据所述像素点的采样域内所述像素点的前景像素点和背景像素点的颜色信息确定所述像素点的颜色信息之前,首先,需要确定目标像素点的各个前景像素点的权重值及目标像素点的各个背景像素点的权重值;所述目标像素点的各个前景像素点或目标像素点的背景像素点的权重值,表示所述前景或背景像素点用于确定目标像素点颜色信息时的权重;
目标像素点的前景像素点的权重值计算方法和目标像素点的背景像素点的权重值计算方法有很大不同,这主要是由视觉系统成像过程中聚焦平面上的物体的全遮挡性和非聚焦平面上的物体的部分遮挡性决定的。
下面以背景像素点q和目标像素点p为例，对该背景像素点q相对于目标像素点p的权重值的计算方法进行介绍，具体的，背景像素点q相对于目标像素点p的权重值Bb(p,q)通过下式确定：
Bb(p,q) = cb·( DCoC(p) / maxDCoC )·δ(p,q)
其中,所述cb为依据采样域大小所确定的常量,可以根据所要获得的图像模糊程度进行调整;maxDCoC为最大弥散圈直径;DCoC(p)为目标像素点p的弥散圈直径值;δ(p,q)为采样函数,采样函数δ(p,q)决定了背景像素p对目标像素q的颜色信息的影响程度,其取值为:
δ(p,q) = 1（当 d(p,q) < DCoC(q) 时）；否则 δ(p,q) = 0
其中,d(p,q)为目标像素点p和目标像素点p的背景像素点q之间的距离,当背景像素点处在聚焦平面上时,其弥散圈直径值DCoC(q)为0,采样函数值为0,从而防止了聚焦平面上的像素向前景像素的颜色泄露。
从(2)式可以看出,背景像素点的权重值受目标像素点的弥散圈直径影响,并与目标像素点的弥散圈直径呈正比;因此,当目标像素点位于聚焦平面时,由于目标像素点的弥散圈直径为零,目标像素点的背景像素点的权重值随之为零,因此,聚焦平面不受背景像素点影响,从而保留了原始的清晰场景。同时,背景像素点的权重值受背景像素点和目标像素点之间的距离远近的影响,该影响的大小由采样函数δ(p,q)决定,当目标像素点和该目标像素点的背景像素点之间的距离小于背景像素点的弥散圈直径值时,δ(p,q)值为1;否则,δ(p,q)值为0;这保证了当背景像素点位于聚焦平面时,由于其自身弥散圈直径为0的关系,对任何目标像素点而言,该背景像素权重必为0,能够有效防止聚焦平面向非聚焦平面的颜色泄露。
由于前景像素点具备扩散到任何处于该前景像素点的弥散圈内的目标像素的能力,即它能影响到位于聚焦平面上的像素点的颜色信息,因此前景像素点的权重计算函数和背景像素点有着本质区别。目标像素点p的前景像素点m相对于目标像素点p的权重值Bf(p,m)通过式(4)确定:
Bf(p,m) = cf·exp( −d(p,m)² / (2·σ(m)²) )·δ(p,m)
所述(4)式通过一个高斯函数来作为前景像素点m权重值Bf(p,m)的主要计算依据,该高斯函数使得前景像素点m相对于其弥散圈内的其他像素点的权重值由弥散圈中心往弥散圈边缘呈递减的趋势,递减的速率受前景像素点的弥散圈直径值的影响,前景像素点的弥散圈直径值越大,递减速率越慢;其中,d(p,m)为像素点p和m之间的距离;σ(m)的取值为像素点m的弥散圈直径的三分之一,即,
σ(m) = DCoC(m) / 3
当前景像素点位于聚焦平面时,其弥散圈直径为0,因此δ(m)值也为0,权重值的递减速率趋于无穷大,等价于它对其他像素的权重值均递减到0,所以位于聚焦平面的前景像素不具备影响其他像素的能力,保证了聚焦平面维持原始的清晰效果。对于远离聚焦平面的前景像素而言,其权重值分布递减缓慢,因此只要目标像素位于其弥散圈内,即使目标像素位于聚焦平面,也会受到该前景像素的影响,从而保证了前景像素的颜色信息能够扩散到聚焦平面上;
(4)式中,cf为依据采样域大小所确定的常量,可以根据所要获得的图像模糊程度进行调整;δ(p,m)为采样函数,其取值如式(5)所示:
δ(p,m) = 1（当 d(p,m) < DCoC(m) 时）；否则 δ(p,m) = 0
采样函数δ(p,m)的取值同样对前景像素的权重值产生影响,当目标像素点和前景像素点的距离小于前景像素的弥散圈直径值时,δ(p,m)值为1,否则δ(p,m)值为0。这保证了前景像素只会影响到位于其弥散圈内的其他像素。
该步骤中,根据所述像素点的采样域内所述像素点的前景像素点和背景像素点的颜色信息确定所述像素点的颜色信息,具体包括:
将目标像素点采样域内所述目标像素点的所有前景像素点和所有背景像素点的颜色信息乘以各自的权重值，将计算结果累加之后，再与目标像素点的颜色信息相加，之后，将最终计算结果除以所有权重值总和，将得到的结果作为目标像素点的颜色信息；其中，颜色信息是指像素点的RGB值；
下面仍然以目标像素点p为例,对目标像素点p的颜色信息的确定方法进行详细介绍;
具体的,目标像素点p的颜色信息Cf(p)通过下式确定:
Cf(p) = Σ_{n∈Ω(p)} B(p,n)·Ci(n) / Σ_{n∈Ω(p)} B(p,n)
其中,n代表点p的采样域Ω(p)内任意一个像素点(包括点p的前景像素点、点p的背景像素点及点p本身);B(p,n)表示点n相对于点p的权重值;Ci(n)代表点n的颜色信息;其中,目标像素点p的权重值的确定方式与目标像素点p的前景像素点的权重值确定方式相同,即:目标像素点p的权重值B(p,p)通过以下方式确定:
B(p,p) = cf·exp( −d(p,p)² / (2·σ(p)²) )·δ(p,p)
其中,d(p,p)为像素点p和p之间的距离,取值为0;σ(p)的取值为像素点p的弥散圈直径的三分之一,即,
σ(p) = DCoC(p) / 3
cf为常量;δ(p,p)为采样函数,其取值如下式所示:
δ(p,p) = 1（当 d(p,p) < DCoC(p) 时）；否则 δ(p,p) = 0
其中,DCoC(p)为目标像素点p的弥散圈直径。
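Putting the pieces together, the final colour of a pixel is a weight-normalised gather over its circular sampling domain. The sketch below is illustrative only: the background/foreground split uses the depth comparison described earlier, the weight closed forms are reconstructions from the surrounding description, and plain nested lists stand in for the image, depth and CoC buffers:

```python
import math

def render_pixel(px, py, color, depth, dcoc, max_dcoc, c_b=1.0, c_f=1.0):
    """Weight-normalised gather over the circular sampling domain of (px, py)."""
    h, w = len(color), len(color[0])
    radius = max_dcoc / 2.0
    num, den = [0.0, 0.0, 0.0], 0.0
    for ny in range(max(0, int(py - radius)), min(h, int(py + radius) + 1)):
        for nx in range(max(0, int(px - radius)), min(w, int(px + radius) + 1)):
            d = math.hypot(nx - px, ny - py)
            if d > radius:                     # outside the sampling domain Ω(p)
                continue
            if depth[ny][nx] > depth[py][px]:  # background: farther from viewpoint
                wgt = c_b * dcoc[py][px] / max_dcoc if d < dcoc[ny][nx] else 0.0
            else:                              # foreground, including p itself
                if d >= dcoc[ny][nx]:          # δ = 0 (also when n is in focus)
                    wgt = 0.0
                else:
                    sigma = dcoc[ny][nx] / 3.0
                    wgt = c_f * math.exp(-d * d / (2.0 * sigma * sigma))
            den += wgt
            for c in range(3):
                num[c] += wgt * color[ny][nx][c]
    if den == 0.0:            # nothing spreads onto this pixel: keep the sharp colour
        return list(color[py][px])
    return [v / den for v in num]
```

Running this gather once per pixel of the pinhole-rendered image yields the depth-of-field image; fully in-focus regions fall into the zero-denominator branch and keep their original colour.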
当确定目标图像中的每一个像素点的颜色信息之后,依据所述确定的颜色信息对各个像素点的颜色进行设置,这样,就生成了景深渲染后的图像;
本发明实施例提供的方案,通过将目标像素点采样域内的像素点区分为前景像素点和背景像素点,能够还原自然界景深渲染图像生成过程中背景区域被遮挡、前景区域能扩散的特性。利用不同像素点的弥散圈直径值作为该像素点权重计算过程中的一个重要依据,根据聚焦平面弥散圈直径为0的特性截断了背景像素点往聚焦平面的扩散,也防止了聚焦平面上的像素点被其他像素点所聚合;同时,模糊后的前景和背景的交界处过分锐利的人工痕迹也得到了解决,这主要利用了高斯函数作为权重值计算函数带来的平滑衰减而产生柔化边缘的效果。
本发明实施例提供了一种景深渲染装置,如图3所示,所述装置包括:最大弥散圈直径确定模块31、采样域确定模块32及颜色信息确定模块33;其中,
所述最大弥散圈直径确定模块31,配置为确定目标图像的最大弥散圈直径;
所述采样域确定模块32,配置为根据目标图像的最大弥散圈直径确定各个像素点的采样域;
所述颜色信息确定模块33，配置为对目标图像中的每一个像素点执行以下处理：确定所述像素点的采样域内所述像素点的前景像素点和背景像素点的权重值，根据所述像素点的前景像素点和背景像素点的权重值和颜色信息确定所述像素点的颜色信息。
具体的,所述最大弥散圈直径确定模块31,配置为通过以下方式确定目标图像的最大弥散圈直径:
确定目标图像中各个像素点的弥散圈直径,将所确定的各个像素点的弥散圈直径的最大值确定为目标图像的最大弥散圈直径。
其中,所述最大弥散圈直径确定模块31还通过以下方式确定目标图像中像素点p的弥散圈直径:
DCoC(p) = D·f·|depth(p)−fd| / ( depth(p)·(fd−f) )
其中,DCoC(p)为像素点p的弥散圈直径;depth(p)为像素点p和透镜之间的距离;fd为聚焦平面和透镜之间的距离;f为透镜焦点和透镜之间的距离;D为透镜的直径大小。
具体的,所述最大弥散圈直径确定模块31,还配置为在[8,32]像素范围内选定所述最大弥散圈直径;其中,可以根据实际需要将最大弥散圈直径设置为[8,32]像素范围内的任意值;优选的,可以将最大弥散圈直径设置为16。
具体的,所述采样域确定模块32,配置为根据以下方式确定各个像素点的采样域:
将各个像素点的采样域设置为以各个像素点为圆心,并以目标图像的最大弥散圈直径作为直径的圆形域。
具体的,所述像素点的前景像素点为所述像素点的采样域内相对于目标像素点靠近视点的像素点;所述像素点的背景像素为所述像素点的采样域内相对于目标像素点远离视点的像素点。
在一个实施例中,所述颜色信息确定模块33,具体配置为通过以下方式确定目标像素点p的背景像素点q的权重值Bb(p,q):
Bb(p,q) = cb·( DCoC(p) / maxDCoC )·δ(p,q)
其中,cb为常量;maxDCoC为最大弥散圈直径;DCoC(p)为目标像素点p的弥散圈直径值;δ(p,q)为采样函数,其取值为:
δ(p,q) = 1（当 d(p,q) < DCoC(q) 时）；否则 δ(p,q) = 0
其中,d(p,q)为目标像素点p和目标像素点p的背景像素点q之间的距离,DCoC(q)为所述背景像素点q的弥散圈直径;
所述颜色信息确定模块33,还配置为通过以下方式确定目标像素点p的前景像素点m的权重值Bf(p,m):
Bf(p,m) = cf·exp( −d(p,m)² / (2·σ(m)²) )·δ(p,m)
其中,d(p,m)为像素点p和m之间的距离;σ(m)的取值为像素点m的弥散圈直径的三分之一,即,
σ(m) = DCoC(m) / 3
cf为常量；δ(p,m)为采样函数，其取值如下式所示：
δ(p,m) = 1（当 d(p,m) < DCoC(m) 时）；否则 δ(p,m) = 0
其中,DCoC(m)为目标像素点p的前景像素点m的弥散圈直径。
具体的,所述颜色信息确定模块33,配置为通过以下方式确定所述像素点的颜色信息:
Cf(p) = Σ_{n∈Ω(p)} B(p,n)·Ci(n) / Σ_{n∈Ω(p)} B(p,n)
其中,Cf(p)代表像素点p的颜色信息;n代表像素点p的采样域Ω(p)内任意一个像素点(包括点p的前景像素点、点p的背景像素点及点p本身);B(p,n)表示点n相对于点p的权重值;Ci(n)代表点n的颜色信息;其中,目标像素点p的权重值的确定方式与目标像素点p的前景像素点的权重值确定方式相同,即,目标像素点p的权重值B(p,p)通过以下方式确定:
B(p,p) = cf·exp( −d(p,p)² / (2·σ(p)²) )·δ(p,p)
其中,d(p,p)为像素点p和p之间的距离,取值为0;σ(p)的取值为像素点p的弥散圈直径的三分之一,即,
σ(p) = DCoC(p) / 3
cf为常量;δ(p,p)为采样函数,其取值如下式所示:
δ(p,p) = 1（当 d(p,p) < DCoC(p) 时）；否则 δ(p,p) = 0
其中,DCoC(p)为目标像素点p的弥散圈直径。
即,将目标像素点采样域内所述目标像素点的所有前景像素点、所有背景像素点及目标像素点的颜色信息乘以各自的权重值,将计算结果累加之后,再与目标像素点的颜色信息相加,之后,将最终计算结果除以所有权重值总和,将得到的结果作为目标像素点的颜色信息。
在具体实施过程中,上述最大弥散圈直径确定模块31、采样域确定模块32、颜色信息确定模块33可以由图像处理装置内的中央处理器(CPU,Central Processing Unit)、微处理器(MPU,Micro Processing Unit)、数字信号处理器(DSP,Digital Signal Processor)或可编程逻辑阵列(FPGA,Field-Programmable Gate Array)来实现。
本领域内的技术人员应明白,本发明的实施例可提供为方法、系统、或计算机程序产品。因此,本发明可采用硬件实施例、软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器和光学存储器等)上实施的计算机程序产品的形式。
本发明是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中，使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品，该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
以上所述,仅为本发明的较佳实施例而已,并非用于限定本发明的保护范围。

Claims (16)

  1. 一种景深渲染方法,所述方法包括:
    确定目标图像的最大弥散圈直径;
    根据目标图像的所述最大弥散圈直径确定目标图像中各个像素点的采样域;
    对目标图像中的每一个像素点执行以下处理:
    确定所述像素点的采样域内所述像素点的前景像素点和背景像素点的权重值,根据所述像素点的前景像素点和背景像素点的权重值以及颜色信息确定所述像素点的颜色信息。
  2. 根据权利要求1所述的方法,其中,通过以下方式确定目标图像的最大弥散圈直径:
    确定目标图像中各个像素点的弥散圈直径,将所确定的各个像素点的弥散圈直径的最大值确定为目标图像的最大弥散圈直径。
  3. 根据权利要求2所述的方法,其中,通过以下方式确定目标图像中像素点p的弥散圈直径:
    DCoC(p) = D·f·|depth(p)−fd| / ( depth(p)·(fd−f) )
    其中,所述DCoC(p)为目标像素点p的弥散圈直径值,depth(p)为像素点p和透镜之间的距离;fd为聚焦平面和透镜之间的距离;f为透镜焦点和透镜之间的距离;D为透镜的直径大小。
  4. 根据权利要求1所述的方法,其中,所述最大弥散圈直径在[8,32]像素范围内取值。
  5. 根据权利要求1至4中任一项所述的方法,其中,所述根据目标图像的最大弥散圈直径确定各个像素的采样域,包括:
    将各个像素点的采样域设置为以各个像素点为圆心，并以目标图像的最大弥散圈直径作为直径的圆形域。
  6. 根据权利要求5所述的方法,其中,所述像素点的前景像素点,为所述像素点的采样域内相对于目标像素点靠近视点的像素点;
    所述像素点的背景像素,为所述像素点的采样域内相对于目标像素点远离视点的像素点。
  7. 根据权利要求1所述的方法,其中,通过以下方式确定目标像素点p的背景像素点q的权重值Bb(p,q):
    Bb(p,q) = cb·( DCoC(p) / maxDCoC )·δ(p,q)
    其中,cb为常量;maxDCoC为最大弥散圈直径;DCoC(p)为目标像素点p的弥散圈直径值;δ(p,q)为采样函数,其取值为:
    δ(p,q) = 1（当 d(p,q) < DCoC(q) 时）；否则 δ(p,q) = 0
    其中,d(p,q)为目标像素点p和目标像素点p的背景像素点q之间的距离,DCoC(q)为所述背景像素点q的弥散圈直径;
    通过以下方式确定目标像素点p的前景像素点m的权重值Bf(p,m):
    Bf(p,m) = cf·exp( −d(p,m)² / (2·σ(m)²) )·δ(p,m)
    其中,d(p,m)为像素点p和m之间的距离;σ(m)的取值为像素点m的弥散圈直径的三分之一,即,
    σ(m) = DCoC(m) / 3
    cf为常量;δ(p,m)为采样函数,其取值如下式所示:
    δ(p,m) = 1（当 d(p,m) < DCoC(m) 时）；否则 δ(p,m) = 0
    其中,DCoC(m)为目标像素点p的前景像素点m的弥散圈直径。
  8. 根据权利要求7所述的方法,其中,通过以下方式确定所述像素点的颜色信息:
    Cf(p) = Σ_{n∈Ω(p)} B(p,n)·Ci(n) / Σ_{n∈Ω(p)} B(p,n)
    其中,Cf(p)代表像素点p的颜色信息;n代表像素点p的采样域Ω(p)内任意一个像素点,所述任意一个像素点包括:点p的前景像素点和点p的背景像素点,以及像素点p本身;B(p,n)表示点n相对于点p的权重值;Ci(n)代表点n的颜色信息。
  9. 一种景深渲染装置,所述装置包括:最大弥散圈直径确定模块、采样域确定模块及颜色信息确定模块;其中,
    所述最大弥散圈直径确定模块,配置为确定目标图像的最大弥散圈直径;
    所述采样域确定模块,配置为根据目标图像的所述最大弥散圈直径确定各个像素点的采样域;
    所述颜色信息确定模块,配置为对目标图像中的每一个像素点执行以下处理:
    确定所述像素点的采样域内所述像素点的前景像素点和背景像素点的权重值,根据所述像素点的前景像素点和背景像素点的权重值以及颜色信息确定所述像素点的颜色信息。
  10. 根据权利要求9所述的装置,其中,所述最大弥散圈直径确定模块配置为通过以下方式确定目标图像的最大弥散圈直径:
    确定目标图像中各个像素点的弥散圈直径,将所确定的各个像素点的弥散圈直径的最大值确定为目标图像的最大弥散圈直径。
  11. 根据权利要求10所述的装置,其中,所述最大弥散圈直径确定模块配置为通过以下方式确定目标图像中像素点p的弥散圈直径:
    DCoC(p) = D·f·|depth(p)−fd| / ( depth(p)·(fd−f) )
    其中,所述DCoC(p)为目标像素点p的弥散圈直径值,depth(p)为像素点p和透镜之间的距离;fd为聚焦平面和透镜之间的距离;f为透镜焦点和透镜之间的距离;D为透镜的直径大小。
  12. 根据权利要求9所述的装置,其中,所述最大弥散圈直径确定模块,配置为在[8,32]像素范围内选定所述最大弥散圈直径。
  13. 根据权利要求9至12中任一项所述的装置,其中,所述采样域确定模块配置为根据目标图像的最大弥散圈直径确定各个像素点的采样域,包括:
    将各个像素点的采样域设置为以各个像素点为圆心,并以目标图像的最大弥散圈直径作为直径的圆形域。
  14. 根据权利要求13所述的装置,其中,所述像素点的前景像素点,为所述像素点的采样域内相对于目标像素点靠近视点的像素点;
    所述像素点的背景像素,为所述像素点的采样域内相对于目标像素点远离视点的像素点。
  15. 根据权利要求9所述的装置,其中,所述颜色信息确定模块配置为通过以下方式确定目标像素点p的背景像素点q的权重值Bb(p,q):
    Bb(p,q) = cb·( DCoC(p) / maxDCoC )·δ(p,q)
    其中,cb为常量;maxDCoC为最大弥散圈直径;DCoC(p)为目标像素点p的弥散圈直径值;δ(p,q)为采样函数,其取值为:
    δ(p,q) = 1（当 d(p,q) < DCoC(q) 时）；否则 δ(p,q) = 0
    其中,d(p,q)为目标像素点p和目标像素点p的背景像素点q之间的距离,DCoC(q)为所述背景像素点q的弥散圈直径;
    所述颜色信息确定模块，还配置为通过以下方式确定目标像素点p的前景像素点m的权重值Bf(p,m)：
    Bf(p,m) = cf·exp( −d(p,m)² / (2·σ(m)²) )·δ(p,m)
    其中,d(p,m)为像素点p和m之间的距离;σ(m)的取值为像素点m的弥散圈直径的三分之一,即,
    σ(m) = DCoC(m) / 3
    cf为常量;δ(p,m)为采样函数,其取值如下式所示:
    δ(p,m) = 1（当 d(p,m) < DCoC(m) 时）；否则 δ(p,m) = 0
    其中,DCoC(m)为目标像素点p的前景像素点m的弥散圈直径。
  16. 根据权利要求15所述的装置,其中,所述颜色信息确定模块配置为通过以下方式确定所述像素点的颜色信息Cf(p):
    Cf(p) = Σ_{n∈Ω(p)} B(p,n)·Ci(n) / Σ_{n∈Ω(p)} B(p,n)
    其中,Cf(p)代表像素点p的颜色信息;n代表像素点p的采样域Ω(p)内任意一个像素点,所述任意一个像素点包括:点p的前景像素点和点p的背景像素点,以及像素点p本身;B(p,n)表示点n相对于点p的权重值;Ci(n)代表点n的颜色信息。
PCT/CN2015/070919 2014-10-17 2015-01-16 一种景深渲染方法和装置 WO2016058288A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410555040.2 2014-10-17
CN201410555040.2A CN105574818B (zh) 2014-10-17 2014-10-17 一种景深渲染方法和装置

Publications (1)

Publication Number Publication Date
WO2016058288A1 true WO2016058288A1 (zh) 2016-04-21

Family

ID=55746030

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/070919 WO2016058288A1 (zh) 2014-10-17 2015-01-16 一种景深渲染方法和装置

Country Status (2)

Country Link
CN (1) CN105574818B (zh)
WO (1) WO2016058288A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242843A (zh) * 2020-01-17 2020-06-05 深圳市商汤科技有限公司 图像虚化方法、图像虚化装置、设备及存储装置

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107370958B (zh) * 2017-08-29 2019-03-29 Oppo广东移动通信有限公司 图像虚化处理方法、装置及拍摄终端

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010095770A1 (ko) * 2009-02-19 2010-08-26 인하대학교 산학협력단 입체영상 가시화를 위한 자동 피사계심도 조절 방법
US7787688B1 (en) * 2006-01-25 2010-08-31 Pixar Interactive depth of field using simulated heat diffusion
CN102750726A (zh) * 2011-11-21 2012-10-24 新奥特(北京)视频技术有限公司 一种基于OpenGL实现景深效果的方法
CN102968814A (zh) * 2012-11-22 2013-03-13 华为技术有限公司 一种图像渲染的方法及设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7453459B2 (en) * 2001-02-26 2008-11-18 Adobe Systems Incorporated Composite rendering 3-D graphical objects
US6975329B2 (en) * 2002-12-09 2005-12-13 Nvidia Corporation Depth-of-field effects using texture lookup
JP6214236B2 (ja) * 2013-03-05 2017-10-18 キヤノン株式会社 画像処理装置、撮像装置、画像処理方法、及びプログラム

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7787688B1 (en) * 2006-01-25 2010-08-31 Pixar Interactive depth of field using simulated heat diffusion
WO2010095770A1 (ko) * 2009-02-19 2010-08-26 인하대학교 산학협력단 입체영상 가시화를 위한 자동 피사계심도 조절 방법
CN102750726A (zh) * 2011-11-21 2012-10-24 新奥特(北京)视频技术有限公司 一种基于OpenGL实现景深效果的方法
CN102968814A (zh) * 2012-11-22 2013-03-13 华为技术有限公司 一种图像渲染的方法及设备

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
HAMMON, E. J. ET AL.: "GPU Gems 3, chapter 28. Pratical Post-Process Depth of Field.", 31 December 2007 (2007-12-31), XP055274468, Retrieved from the Internet <URL:http://http.Developer,nvidia.com/GPUGems3/gpugems3_ch28.html> *
HILLAIRE, S. ET AL.: "Depth-of-Field Blur Effects for First-Person Navigation in Virtual Environments.", IEEE COMPUTER GRAPHICS AND APPLICATIONS, vol. 28, no. 6, 31 December 2008 (2008-12-31), pages 47 - 55, XP011248889 *
HUANG, DAOXIAO ET AL.: "Real-time depth-of-field simulation on GPU", APPLICATION RESEARCH OF COMPUTERS, vol. 25, no. 10, 31 October 2008 (2008-10-31) *
WU, SHANG.: "Research and Implementation of Rapid Depth of Field Rendering Algorithm", MASTER'S THESIS OF SHANGHAI JIAOTONG UNIVERSITY, 31 January 2014 (2014-01-31) *
YU , WEI ET AL.: "SIMPLIFIED ALGORITHM FOR POST-PROCESSING FAST DEPTH OF FIELD EFFECT", COMPUTER APPLICATIONS AND SOFTWARE, vol. 26, no. 10, 31 March 2009 (2009-03-31) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242843A (zh) * 2020-01-17 2020-06-05 深圳市商汤科技有限公司 图像虚化方法、图像虚化装置、设备及存储装置
CN111242843B (zh) * 2020-01-17 2023-07-18 深圳市商汤科技有限公司 图像虚化方法、图像虚化装置、设备及存储装置

Also Published As

Publication number Publication date
CN105574818B (zh) 2020-07-17
CN105574818A (zh) 2016-05-11

Similar Documents

Publication Publication Date Title
US10403036B2 (en) Rendering glasses shadows
US11132544B2 (en) Visual fatigue recognition method, visual fatigue recognition device, virtual reality apparatus and storage medium
JP2022166078A (ja) デジタル媒体と観察者の相互作用の構成及び実現
US9881202B2 (en) Providing visual effects for images
CN108848367B (zh) 一种图像处理的方法、装置及移动终端
KR20220051376A (ko) 메시징 시스템에서의 3d 데이터 생성
TWI777098B (zh) 一種圖像處理方法及裝置、電子設備、儲存介質
KR20120064641A (ko) 영상 처리 장치, 조명 처리 장치 및 그 방법
CN107944420A (zh) 人脸图像的光照处理方法和装置
US20180374258A1 (en) Image generating method, device and computer executable non-volatile storage medium
TWI752473B (zh) 圖像處理方法及裝置、電子設備和電腦可讀儲存媒體
Penner et al. Pre-integrated skin shading
US11823321B2 (en) Denoising techniques suitable for recurrent blurs
Magdics et al. Post-processing NPR effects for video games
CN111199573A (zh) 一种基于增强现实的虚实互反射方法、装置、介质及设备
CN107851309A (zh) 一种图像增强方法及装置
US11288774B2 (en) Image processing method and apparatus, storage medium, and electronic apparatus
CN103093416B (zh) 一种基于图形处理器分区模糊的实时景深模拟方法
WO2016058288A1 (zh) 一种景深渲染方法和装置
CN110473281A (zh) 三维模型的描边处理方法、装置、处理器及终端
CN113223128B (zh) 用于生成图像的方法和装置
Lindeberg Concealing rendering simplifications using gazecontingent depth of field
CN114245907A (zh) 自动曝光的光线追踪
US9563940B2 (en) Smart image enhancements
US20130282344A1 (en) Systems and methods for simulating accessory display on a subject

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15851134

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15851134

Country of ref document: EP

Kind code of ref document: A1