WO2020038407A1 - Image rendering method and apparatus, image processing device, and storage medium - Google Patents

Image rendering method and apparatus, image processing device, and storage medium

Info

Publication number
WO2020038407A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
sub
region
target
area
Prior art date
Application number
PCT/CN2019/101802
Other languages
English (en)
French (fr)
Inventor
赵瑞祥
李怀哲
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to EP19850930.9A priority Critical patent/EP3757944A4/en
Publication of WO2020038407A1 publication Critical patent/WO2020038407A1/zh
Priority to US17/066,707 priority patent/US11295528B2/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N13/279Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/36Level of detail
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes

Definitions

  • The present application relates to the field of image processing technology, and in particular, to an image rendering method, device, image processing device, and storage medium.
  • VR (Virtual Reality)
  • An embodiment of the present application provides an image rendering method, executed by an image processing device, which includes: acquiring an initial image of a current scene, and determining a first region and a second region on the initial image; rendering the image data of the first region in the initial image based on a first rendering rule to obtain a first sub-image; rendering the image data of the second region in the initial image based on a second rendering rule to obtain a second sub-image; and generating a target display image according to the first sub-image and the second sub-image; wherein the first rendering rule and the second rendering rule are different.
  • An embodiment of the present application further provides an image rendering method, executed by an image processing device, which includes: acquiring an initial image of a current scene, and determining a first region and a second region on the initial image; rendering the image data of the first region in the initial image based on a first rendering rule to obtain a first sub-image; rendering the image data of the second region in the initial image based on a second rendering rule to obtain a second sub-image; generating a mask layer, the size of which corresponds to the remaining area of the second region excluding the first region; and generating a target display image according to the first sub-image, the second sub-image, and the mask layer; wherein the first rendering rule and the second rendering rule are different.
  • An embodiment of the present application further provides an image rendering device, including: an obtaining unit configured to obtain an initial image of a current scene; a determining unit configured to determine a first region and a second region on the initial image; a rendering unit configured to render the image data of the first region in the initial image based on a first rendering rule to obtain a first sub-image, and further configured to render the image data of the second region in the initial image based on a second rendering rule to obtain a second sub-image; and a generating unit configured to generate a target display image according to the first sub-image and the second sub-image; wherein the first rendering rule and the second rendering rule are different.
  • An embodiment of the present application further provides an image rendering device, including: an obtaining unit configured to obtain an initial image of a current scene; a determining unit configured to determine a first region and a second region on the initial image; a rendering unit configured to render the image data of the first region in the initial image based on a first rendering rule to obtain a first sub-image, and further configured to render the image data of the second region in the initial image based on a second rendering rule to obtain a second sub-image; and a generating unit configured to generate a mask layer, the size of which corresponds to the remaining area of the second region excluding the first region, and to generate a target display image according to the first sub-image, the second sub-image, and the mask layer; wherein the first rendering rule and the second rendering rule are different.
  • An embodiment of the present application further provides an image processing device, including a processor and a memory, where the memory is used to store a computer program, the computer program includes program instructions, and the processor is configured to call the program instructions to execute the image rendering method described in the embodiments of the present application.
  • An embodiment of the present application further provides a computer storage medium. The computer storage medium stores computer program instructions; when the computer program instructions are executed by a processor, they are used to execute the image rendering method described in the embodiments of the present application.
  • FIG. 1 is an application scenario diagram of an image rendering method according to an embodiment of the present application
  • FIG. 2 is a structural diagram of an image rendering process provided by an embodiment of the present application.
  • FIG. 3A is a flowchart of an image rendering method according to an embodiment of the present application.
  • FIG. 3B is a specific flowchart of step S301 in the embodiment of the present application.
  • FIG. 3C is a specific flowchart of step S302 in the embodiment of the present application.
  • FIG. 3D is a specific flowchart of step S304 in the embodiment of the present application.
  • FIG. 3E is a specific flowchart of step S332 in the embodiment of the present application.
  • FIG. 3F is another specific flowchart of step S304 in the embodiment of the present application.
  • FIG. 3G is a specific flowchart of step S351 in the embodiment of the present application.
  • FIG. 3H is a specific flowchart of step S352 in the embodiment of the present application.
  • FIG. 3I is a specific flowchart of step S353 in the embodiment of the present application.
  • FIG. 4 is a schematic diagram of a relationship between a viewing angle and an image rendering range according to an embodiment of the present application
  • FIG. 5 is a target display image provided by an embodiment of the present application
  • FIG. 6A is a schematic diagram of a method for generating a target display image according to an embodiment of the present application.
  • FIG. 6B is a schematic diagram of a method for calculating a color value of a pixel point in a gaze area according to an embodiment of the present application.
  • FIG. 7A is a schematic diagram of a first area according to an embodiment of the present application.
  • FIG. 7B is a schematic diagram of a method for determining a mixed region according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of another method for determining a mixed region according to an embodiment of the present application.
  • FIG. 9A is a flowchart of a method for rendering image data of a first region according to an embodiment of the present application.
  • FIG. 9B is a specific flowchart of step S902 in the embodiment of the present application.
  • FIG. 10 is a flowchart of a visual depth of field rendering method provided by an embodiment of the present application.
  • FIG. 11 is a flowchart of visual depth of field rendering of a color image according to a target focal length and a reference focal length according to an embodiment of the present application;
  • FIG. 12A is a flowchart of a method for determining a color value of a target pixel according to a target layer according to an embodiment of the present application.
  • FIG. 12B is a flowchart of another method for determining a color value of a target pixel according to a target layer according to an embodiment of the present application.
  • FIG. 13A is a flowchart of another image rendering method according to an embodiment of the present application.
  • FIG. 13B is a schematic diagram of another method for generating a target display image provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of an image rendering device according to an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of an image processing device according to an embodiment of the present application.
  • The process of rendering the image of the current scene in a head-mounted display device includes: the triangles, texture maps, and other image data materials required to render the image in the current scene are moved to the GPU (Graphics Processing Unit) through the CPU (Central Processing Unit); the GPU renders the image data materials through the rendering pipeline to obtain an initial image, and then uses image rendering post-processing technology such as shading to process the initial image, finally obtaining an image of the current VR scene that can be displayed to the user.
  • In addition, in order to meet the parallax requirements of the user's left and right eyes, rendering needs to be performed twice through the rendering pipeline according to the different parameters of the left and right eyes, so as to produce an image conforming to the stereoscopic parallax effect of the head-mounted display device.
  • Moreover, the image refresh rate during rendering must be maintained; for example, in some scenes a refresh rate of not less than 90 fps is required.
  • As a result, the quality of VR visual imaging is limited by computing efficiency, reducing the user experience.
  • To address this, the embodiment of the present application introduces a gaze-based rendering technology, which splits the initial image of the current scene in the head-mounted display device 101 into a first region (Inset) 11 with high resolution quality, which can be understood as the gaze area, that is, the focus portion that the human eye pays attention to (the shaded part centered on the gaze point in FIG. 1), and a second region (Outset) 12 with relatively low resolution (the non-shaded part).
  • In the rendering process of the image data of the first region 11, rendering may be performed at a higher resolution with a superimposed visual depth of field effect; in the rendering process of the image data of the second region 12, rendering is performed at a lower resolution and based on image quality parameters that reduce performance usage. In this way, a clear image superimposed with a visual depth of field effect is generated in the first region 11 and a blurred image is generated in the second region 12.
  • In other words, an image quality parameter capable of reducing performance usage may be preset for the second region.
  • Finally, the two sub-images obtained by rendering the two regions are merged into one image and displayed to the user, which can effectively improve the depth of field perception of the human eye without significantly increasing the software and hardware resources consumed by image rendering. The performance requirements of image rendering for the head-mounted display device are thus taken into account, achieving a higher quality and more immersive head-mounted display device experience.
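  • A minimal sketch of this two-region flow is shown below (illustrative only; the render_fn callback, the region size, and the resolution choices are assumptions, not the patent's API):

```python
def region_around(gaze_xy, size, full_size):
    """Axis-aligned box of `size` pixels centered on the gaze point, clamped to the frame."""
    w, h = full_size
    x, y = gaze_xy
    half = size // 2
    x0, y0 = max(0, x - half), max(0, y - half)
    return x0, y0, min(w, x0 + size), min(h, y0 + size)


def foveated_render(render_fn, gaze_xy, full_size=(1600, 1440)):
    """render_fn(rect, high_quality) is an assumed callback returning an HxWx3
    array for the requested rectangle: high_quality=True stands for the first
    rendering rule (full resolution plus depth of field, Inset), False for the
    cheaper second rendering rule (Outset)."""
    inset = region_around(gaze_xy, size=400, full_size=full_size)

    first_sub = render_fn(inset, high_quality=True)                  # Inset
    second_sub = render_fn((0, 0, *full_size), high_quality=False)   # Outset

    x0, y0, x1, y1 = inset
    target = second_sub.copy()
    target[y0:y1, x0:x1] = first_sub   # naive overlay; the patent also blends the edges
    return target
```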
  • FIG. 2 is a structural diagram of an image rendering process according to an embodiment of the present application.
  • As shown in FIG. 2, a first region is first obtained on the initial image.
  • The determination process of the first region may include: determining a fixation point on the initial image, determining a projection matrix according to the fixation point and a target field of view (FOV, Field Of View), and then obtaining an image presentation range; this image presentation range is determined as the first region on the initial image.
  • The above field of view angle can also be called the field of view.
  • The size of the field of view angle determines the field of view of the human eye: the larger the field of view angle, the larger the field of view. Generally, a target object beyond this angle will not be seen by the human eye.
  • the second region on the initial image may be determined according to the initial image and the first region on the initial image.
  • the image data of the first region is rendered based on the first rendering rule to obtain a first sub-image with a visual depth of field effect.
  • the image data of the second region is rendered based on the second rendering rule to obtain a second sub-image.
  • the first sub-image and the second sub-image are fused to obtain a target display image.
  • FIG. 3A is a schematic flowchart of an image rendering method according to an embodiment of the present application.
  • the image rendering method according to the embodiment of the present application may be applied to a virtual reality (VR, Virtual Reality) scene and executed by an image processing device.
  • the image processing device may be a VR host, VR glasses, or other devices capable of performing corresponding image rendering processing. If the image processing device is a VR host, the image rendering method in the embodiment of the present application may be specifically implemented by the VR host, and the VR host sends the image obtained after the rendering process to the VR glasses for display.
  • the image processing device is VR glasses
  • the image rendering method in the embodiment of the present application may also be executed by the VR glasses with a built-in image rendering function and display the finally rendered image.
  • the image rendering method in this embodiment of the present application may also be applied to other application scenarios that require image rendering.
  • an image rendering method includes the following steps:
  • When the image rendering method shown in FIG. 3A is used for image rendering, the image processing device first needs to acquire an initial image of the current scene.
  • The initial image may be obtained by acquiring materials such as triangles and texture maps required to render an image in the current scene, and then using a graphics processor (GPU, Graphics Processing Unit) to render the materials through the rendering pipeline to obtain the initial image.
  • Of course, the initial image may also be obtained by rendering the image of the current scene in another rendering manner, which is not specifically limited in the embodiment of the present application.
  • After acquiring the initial image of the current scene, the image processing device determines a first region and a second region on the initial image.
  • The first region may refer to the part focused on by the human eye, that is, the clearest part of the image.
  • The second region may refer to a partial region of the initial image that includes the first region, may refer to the entire initial image region including the first region, or may refer to the remaining area of the initial image excluding the first region. Because the part focused on by the human eye is relatively narrow, the first region only occupies a small part of the current scene, so the first region in the initial image may be significantly smaller than the second region.
  • FIG. 3B is a specific flowchart of the image processing device determining the first region and the second region on the initial image in step S301. As shown in FIG. 3B, step S301 may include the following steps:
  • S311 Perform human eye tracking processing on the target user by using a human eye tracking strategy to determine a fixation point on the initial image;
  • the eye tracking strategy can be used to determine the specific position in the current scene that the target user is watching, and the gaze point on the initial image can be determined based on this position.
  • Based on the relationship between the FOV and the projection matrix, a target FOV used to determine the first region can be preset; the target projection matrix is then determined according to the target FOV, and the image presentation range is obtained accordingly.
  • The region corresponding to the image presentation range is the first region. Therefore, determining the first region on the initial image according to the fixation point and the target FOV can be understood as: determining the image presentation range centered on the fixation point as the first region.
  • In some embodiments, the image processing device may further expand the image presentation range by a certain proportion, such as 10%, to obtain the first region.
  • In this case, the first region includes two parts: the area represented by the image presentation range (also referred to as the core area) and the area represented by the enlarged portion of the image presentation range (also referred to as the edge area). In this way, when the target display image is generated based on the first sub-image and the second sub-image, the first sub-image and the second sub-image are fused to achieve a natural fusion effect.
  • the shape of the first region may be a circle centered on the fixation point, or the shape of the first region may be a square, rectangle, or other shape centered on the fixation point.
  • the specific shape of the first region is not limited.
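  • As a rough illustration of the core/edge split described above (a sketch under assumed numbers; only the 10% expansion factor comes from the example, the rest is hypothetical):

```python
import numpy as np

def first_region_masks(gaze_xy, core_radius, image_size, edge_ratio=0.10):
    """Boolean masks for the core area and the edge area of a circular first
    region centered on the gaze point. core_radius would come from the target
    FOV / projection matrix; here it is simply a parameter."""
    w, h = image_size
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])

    outer_radius = core_radius * (1.0 + edge_ratio)   # expand the range by e.g. 10%
    core_mask = dist <= core_radius
    edge_mask = (dist > core_radius) & (dist <= outer_radius)
    return core_mask, edge_mask
```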
  • In other embodiments, the first region may also be determined based on the performance of the image processing device and the position of the gaze point on the initial image.
  • Specifically, a mapping relationship is established between the performance parameters of the image processing device (such as the GPU memory rate and/or processing frequency) and the FOV.
  • When determining the first region, the fixation point on the initial image is first determined, and the performance parameters of the image processing device are detected.
  • The FOV is then selected based on the detected performance parameters and the mapping relationship, and the selected FOV is used as the target FOV.
  • Finally, the first region on the initial image is determined according to the gaze point and the target FOV.
  • In addition, changing the size of the target FOV generates different projection matrices and thus different image presentation ranges (which can also be referred to as image rendering ranges), so that different first regions can be determined (as shown in FIG. 4).
  • It can be seen from FIG. 4 that the projection matrix generated by a relatively small FOV produces a relatively small field of vision during rendering, and the projection matrix generated by a relatively large FOV produces a relatively large field of vision during rendering.
  • The first region 401 defined by the image presentation range corresponding to a FOV of 58° is smaller than the first region 402 defined by the image presentation range corresponding to a FOV of 90°.
  • The above-mentioned determination of the first region on the initial image based on the fixation point and the target FOV can therefore be understood as: after the fixation point is determined, adjusting the FOV to equal the target FOV to obtain the projection matrix of the target FOV, that is, adjusting the projection matrix according to the target FOV.
  • Based on the adjusted projection matrix, the image presentation range can be determined on the initial image, and the area within the image presentation range on the initial image is the first region.
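  • For reference, the standard perspective projection matrix built from a vertical FOV is shown below (a generic graphics formula rather than code from the patent; the near and far plane values are placeholders):

```python
import math
import numpy as np

def perspective_matrix(fov_y_deg, aspect, near=0.1, far=1000.0):
    """Column-vector, OpenGL-style projection matrix; a smaller FOV gives a
    narrower frustum and therefore a smaller image presentation range."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0,                           0.0],
        [0.0,        f,   0.0,                           0.0],
        [0.0,        0.0, (far + near) / (near - far),   2 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                          0.0],
    ])

# e.g. compare perspective_matrix(58, 1.0) with perspective_matrix(90, 1.0)
```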
  • The second region may refer to a region of the initial image that includes the first region, may refer to the entire initial image region including the first region, or may refer to the remaining area of the initial image excluding the first region. Because the part focused on by the human eye is relatively narrow, the first region only occupies a small part of the current scene, so the first region in the initial image may be significantly smaller than the second region.
  • S302 Render image data of the first region in the initial image based on a first rendering rule to obtain a first sub-image.
  • In the embodiment of the present application, the image data of the first region and the second region are rendered separately: the image processing device renders the image data of the first region in the initial image based on the first rendering rule to obtain a first sub-image.
  • rendering the image data of the first region in the initial image based on the first rendering rule may be rendering based on the color image data of the first region and the depth image data generated for the first region.
  • The depth image refers to an image that contains information about the distances to the surfaces of the scene objects included in the initial image; simply understood, the pixel values in the depth image reflect the distance between the scene objects and the human eye.
  • FIG. 3C is a specific flowchart of step S302, in which the image processing device renders the image data of the first region in the initial image based on the first rendering rule to obtain the first sub-image.
  • step S302 includes the following steps:
  • the visual depth of field effect may refer to a visual effect in which an image at a focal distance is clear, and an image outside or within the focal distance is blurred.
  • the implementation of S302 may be: according to the color image data and depth image data of the first region, combined with the first rendering rule to obtain a rendered image with a visual depth of field effect, this rendered image is also the first sub-image.
  • Since the initial image obtained by the initial rendering is a color image, the color image data of the first region can be obtained by cropping the image data of the first region from the initial image.
  • The depth image data of the first region may be obtained by using the shader in the GPU rendering program together with the Z-buffer to render the depth information between the 3D objects and the camera into depth image data.
  • The Z-buffer can also be called a depth buffer; it stores the distance between each pixel on the image and the camera, obtained by rasterizing the primitives needed to render the objects, that is, the depth information.
  • The shader can adjust the saturation, brightness, and other attributes of all pixels, and can also produce blurring, highlighting, and other effects.
  • Therefore, the depth image data can be generated by using the shader in combination with the depth values stored in the Z-buffer.
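  • A small sketch of turning non-linear Z-buffer samples into the linear depth image described above (a common graphics conversion shown in Python for illustration; the OpenGL NDC convention and the near/far values are assumptions):

```python
import numpy as np

def zbuffer_to_depth(z_ndc, near=0.1, far=1000.0):
    """Convert Z-buffer values in NDC range [-1, 1] (OpenGL convention) into
    linear eye-space distances, i.e. the per-pixel depth image described above."""
    z_ndc = np.asarray(z_ndc, dtype=np.float64)
    return (2.0 * near * far) / (far + near - z_ndc * (far - near))

# Example: a 2x2 block of Z-buffer samples becomes a tiny depth image.
depth_image = zbuffer_to_depth(np.array([[0.90, 0.95], [0.99, 0.999]]))
```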
  • S303 Render the image data of the second region in the initial image based on the second rendering rule to obtain a second sub-image.
  • Similarly, the image processing device may render the image data of the second region in the initial image based on the second rendering rule to obtain a second sub-image.
  • Rendering the image data of the second region in the initial image based on the second rendering rule may be rendering the image data of the second region in a manner capable of reducing the performance usage rate. For example, before rendering the image data of the second region, the target material required for rendering the image data of the second region may be obtained first; it is then determined whether the material library contains a reference material of the same type as the target material but of lower quality. If such a material exists, the rendering of the image data of the second region is completed using the lower-quality reference material.
  • the step S303 may include: rendering the image data of the second region based on the resolution and image quality parameters indicated by the second rendering rule to obtain a second sub-image.
  • The resolution indicated by the second rendering rule is lower than the resolution used for rendering the image data of the first region in the rendering stage of the first region, and the image quality parameter indicated by the second rendering rule is set as an image quality parameter capable of reducing performance usage.
  • the image quality parameter may refer to a rendering resolution and / or a rendering material quality
  • the image quality parameter indicating reduced performance usage indicated by the second rendering rule may be a lower rendering resolution and a lower quality rendering material.
  • For example, a standard resolution can be set; under normal processing conditions, the initial image is rendered at the standard resolution.
  • In the embodiment of the present application, the first region is rendered at the standard resolution, while the second region is rendered at a resolution lower than the standard resolution.
  • Therefore, using the resolution and image quality parameters indicated by the second rendering rule for rendering can save some performance in the rendering stage of the second region to compensate for the performance loss in the rendering stage of the first region.
  • In this way, the resource consumption of the image processing device will not increase significantly; rendering each region according to the resolution (for example, rendering the first region at the standard resolution mentioned above) and the image quality parameters specified in the first rendering rule and the second rendering rule may even reduce the resource consumption of the image processing device, thereby further ensuring the refresh rate.
  • After the first region and the second region are rendered separately and the first sub-image and the second sub-image are obtained, the image processing device may generate a target display image according to the first sub-image and the second sub-image, and the target display image can be displayed in the current scene of the head-mounted display device.
  • As shown in FIG. 5, the obtained target display image is composed of two parts: the image of the portion gazed at by the target user's eyes is a clear image with a visual depth of field effect (shown as area A in FIG. 5), and this clear image area is obtained by superimposing the first sub-image and the second sub-image; the image of the non-gazed portion is a blurred image without the visual depth of field effect (shown as area B in FIG. 5), and this blurred image area is obtained based on the second sub-image. This ensures the performance of image rendering while effectively improving the depth of field effect of the finally displayed VR image, achieving a more immersive experience.
  • Specifically, the first sub-image can be directly superimposed on the second sub-image according to the position of the first region corresponding to the first sub-image on the initial image, covering the image content at the corresponding position on the second sub-image, so as to obtain the target display image.
  • It should be noted that, in order to generate the target display image shown in FIG. 5, the image processing device needs to use a higher rendering resolution when rendering the first region than when rendering the second region.
  • The difference between the rendering resolution used for the first region and the rendering resolution used for the second region may be determined according to the performance of the image processing device. If the performance of the image processing device is good, the two rendering resolutions can be set so that the difference between them is small, that is, both the resolution used for rendering the first region and the resolution used for rendering the second region are relatively high; if the performance of the image processing device is average, the two rendering resolutions can be set so that the difference between them is large, that is, the resolution used for rendering the second region is relatively low.
  • S304 Generate a target display image according to the first sub-image and the second sub-image.
  • In one embodiment, a mask layer is introduced, and the target display image is generated according to the mask layer, the first sub-image, and the second sub-image.
  • FIG. 3D is a flowchart of generating a target display image according to the first sub image and the second sub image in step S304 of the embodiment of the present application. As shown in FIG. 3D, step S304 may include the following steps:
  • the target display image includes a fixation area and a non-fixation area.
  • The gaze area is an overlapping area formed by performing layer overlay processing on the first sub-image, the second sub-image, and the mask layer, and the color values of the pixels in the gaze area are calculated based on the color values of the pixels in the first sub-image, the second sub-image, and the mask layer at the positions corresponding to the overlapping area; the color values of the pixels in the non-gaze area are determined based on the color values of the pixels of the second sub-image.
  • a mask layer is introduced during image fusion.
  • The role of the mask layer is that, in the final target display image, the closer a pixel is to the position of the gaze point, the closer its color value is to the color value of the first sub-image, and the farther a pixel is from the position of the gaze point, the closer its color value is to the color value of the second sub-image, thus creating a smooth transition effect.
  • The size of the mask layer can be determined according to the size of the first sub-image. In order to achieve a natural combination of the first sub-image and the second sub-image, the size of the mask layer should be equal to or larger than the size of the first sub-image.
  • the shape of the mask layer may be the same as the shape of the first sub-image, for example, the first sub-image is circular, and the mask layer may be circular; the first sub-image is square, and the mask The layer can also be square.
  • the shape of the mask layer may be different from the shape of the first sub-image, and the image processing device may select an appropriate size and shape of the mask layer according to the needs of the current scene.
  • After the mask layer is generated, the image processing device performs layer overlay processing on the first sub-image, the second sub-image, and the mask layer according to the position of the first sub-image in the initial image to generate the target display image.
  • the target display image is composed of two areas: the fixation area and the non-fixation area.
  • the fixation area corresponds to the area where the first sub image, the second sub image, and the mask layer overlap.
  • The non-gaze area refers to the remaining region of the second sub-image excluding the overlapping area.
  • the target display image is obtained by superimposing the first sub-image, the second sub-image, and the mask layer.
  • FIG. 3E shows a specific flowchart of step S332 in the embodiment of the present application. As shown in FIG. 3E, S332 includes the following steps:
  • S342 Superimpose the first sub-image and the mask layer on the second sub-image to form an overlapping area, that is, the gaze area of the target display image;
  • S344 Determine the remaining area of the second sub-image excluding the gaze area as the non-gaze area of the target display image.
  • FIG. 6A is a schematic diagram of a method for generating a target display image.
  • 601 indicates a mask layer
  • 602 indicates a first sub-image
  • 603 indicates a second sub-image
  • 604 indicates a target display image.
  • The overlapping area of the first sub-image 602 on the second sub-image 603 is the dotted-frame portion 603a.
  • 603a serves as the gaze area of the target display image.
  • The shaded portion 603b of the second sub-image 603 is the non-gaze area of the target display image.
  • In one embodiment, the target display image is obtained by calculating the color value of each pixel in the gaze area and the non-gaze area.
  • In the following, B represents the color value of the target pixel in the gaze area; I represents the color value of the pixel on the first sub-image having the same image position as the target pixel; O represents the color value of the pixel on the second sub-image having the same image position as the target pixel; and M represents the mask value of the pixel on the mask layer having the same image position as the target pixel, where the target pixel in the gaze area is the pixel currently being calculated in the gaze area.
  • FIG. 6B is a schematic diagram of a method for calculating the color value of the pixel point in the fixation area.
  • point B is the target pixel point on the fixation area in the target display image.
  • the color value of the target pixel B in the attention area can be obtained.
  • The non-gaze area of the target display image refers to the remaining area of the second sub-image excluding the overlapping area, that is, the non-gaze area of the target display image belongs to a part of the second sub-image. Therefore, the color value of each pixel in the non-gaze area may be calculated as follows: the color value of each pixel of the second sub-image falling into the non-gaze area is used as the color value of the corresponding pixel in the non-gaze area.
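  • The exact combination of I, O, and M is not spelled out in this excerpt; the sketch below assumes the usual linear mask blend B = M·I + (1 − M)·O, which matches the described behaviour but is an assumption rather than the patent's stated formula:

```python
import numpy as np

def compose_target_image(first_sub, second_sub, mask, inset_origin):
    """Blend the first sub-image into the second sub-image through a mask.
    `mask` holds M in [0, 1] and has the same size as `first_sub`;
    `inset_origin` = (x0, y0) is where the first sub-image sits on the second."""
    target = second_sub.astype(np.float32).copy()       # non-gaze area: O as-is
    x0, y0 = inset_origin
    h, w = first_sub.shape[:2]

    I = first_sub.astype(np.float32)
    O = target[y0:y0 + h, x0:x0 + w]
    M = mask[..., None] if mask.ndim == 2 else mask      # broadcast over channels

    # Gaze area: assumed blend B = M * I + (1 - M) * O, so pixels near the gaze
    # point (M close to 1) follow the first sub-image and pixels near the
    # boundary (M close to 0) follow the second sub-image.
    target[y0:y0 + h, x0:x0 + w] = M * I + (1.0 - M) * O
    return target.astype(second_sub.dtype)
```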
  • In another embodiment, the method of generating the target display image according to the first sub-image and the second sub-image in S304 may be: superimposing the first sub-image at the corresponding position in the second sub-image, that is, covering the image at the position corresponding to the first region in the second sub-image, so that in the target image displayed to the user, the part focused on by the human eye is a clear image with a visual depth of field effect and the unfocused part is a blurred image without the visual depth of field effect.
  • FIG. 3F is a specific flowchart of generating a target display image according to the first sub image and the second sub image in step S304 of the embodiment of the present application.
  • the step S304 may include the following steps:
  • S353 Generate a target display image based on the color value of each pixel in the mixed region, the color value of each pixel in the second region, and the color value of each pixel in the first region.
  • In this embodiment, the first sub-image includes an edge region and a core region, where the core region is determined according to the fixation point and the target FOV, and the edge region can be understood as being obtained by expanding the core region by a certain proportion, such as 10%, which can be set to different values according to different application scenarios. In the first area 711 shown in FIG. 7A, the shaded area is the edge area 712 and the non-shaded area is the core area 703. It can be understood that FIG. 7A is only a schematic diagram of the edge region and the core region in the first region; in fact, the edge region is much smaller than the core region.
  • the first sub-image includes an edge region and a core region.
  • FIG. 3G is a specific flowchart for determining a mixed region according to the second sub-image and the first sub-image as described in step S351 in the embodiment of the present application. As shown in FIG. 3G, step S351 includes the following steps:
  • When determining the mixed region, first determine the position where the first sub-image is superimposed on the second sub-image; then determine a reference region in the second sub-image that can cover the position of the first sub-image in the second sub-image; and then remove the core area of the first sub-image from the reference region, and determine the overlapping part of the remaining first sub-image and the reference region as the mixed region.
  • In one embodiment, the size of the reference region may be larger than that of the first sub-image.
  • In this case, the reference region covers part of the second sub-image in addition to the area of the second sub-image corresponding to the first sub-image.
  • FIG. 7B is a method for determining a mixed region.
  • Region 701 represents a first sub-image
  • region 702 represents a second sub-image
  • region 703 represents a core region in the first sub-image
  • region 704 represents a reference region
  • the size of the reference area may be equal to the size of the first sub-image, and at this time, the reference area may just cover the corresponding area of the first sub-image in the second sub-image.
  • FIG. 8 is another method for determining the mixed region.
  • the region 801 represents both the first sub-image and the reference region
  • the region 802 represents the core region in the first sub-image
  • The region 803 represents the second sub-image.
  • a region 804 indicates a mixed region.
  • FIG. 3H is a specific flowchart of determining the color value of each pixel point according to the distance of each pixel point from the image center in step S352 in the embodiment of the present application.
  • the step S352 may include the following steps:
  • the color values of each pixel point other than the target pixel in the mixed region are determined in the same way as determining the color value of the target pixel point.
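  • The individual sub-steps of S352 are not reproduced in this excerpt; the sketch below shows one plausible distance-based weighting for a mixed-region pixel (the linear falloff and the radius parameters are assumptions):

```python
import numpy as np

def mixed_region_color(first_color, second_color, dist_from_center,
                       core_radius, outer_radius):
    """Color of a mixed-region pixel as a function of its distance from the
    image center of the first sub-image: close to the core it follows the
    first sub-image, close to the outer edge it follows the second sub-image."""
    # Normalised weight: 1 at the core boundary, 0 at the outer boundary.
    t = (outer_radius - dist_from_center) / (outer_radius - core_radius)
    t = float(np.clip(t, 0.0, 1.0))
    return t * np.asarray(first_color, dtype=np.float32) \
        + (1.0 - t) * np.asarray(second_color, dtype=np.float32)

# Example: halfway through the edge ring the two colors contribute equally.
c = mixed_region_color([255, 0, 0], [0, 0, 255], dist_from_center=110,
                       core_radius=100, outer_radius=120)
```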
  • FIG. 3I is a specific flowchart of step S353 in the embodiment of the present application. As shown in FIG. 3I, the step S353 may include the following steps:
  • The embodiment of the present application provides the above two implementations for generating the target display image based on the first sub-image and the second sub-image in S304.
  • The first is to introduce a mask layer and calculate the color value of each pixel in the overlapping and non-overlapping areas of the mask layer, the first sub-image, and the second sub-image, so as to obtain the target display image.
  • The second is to introduce a mixed region and generate the target display image according to the color values of the pixels in the mixed region, the first sub-image, and the second sub-image.
  • In practical applications, an appropriate target display image generation method may be selected from the above two methods according to the resolution of the first sub-image and the resolution of the second sub-image.
  • If the difference between the two resolutions used for rendering the first sub-image and the second sub-image is greater than or equal to a preset value, the first method may be used to generate the target display image; for example, if the resolution used for rendering the first sub-image is 200 ppi (pixels per inch), the resolution used for rendering the second sub-image is 160 ppi, and the preset difference is 20, the first method can be used to generate the target display image. If the difference between the two resolutions used for rendering the first sub-image and the second sub-image is less than the preset value, the second method may be used to generate the target display image.
  • FIG. 9A is a schematic flowchart of an image rendering method for a first region according to an embodiment of the present application.
  • the method in the embodiment of the present application may correspond to the foregoing S302.
  • the method in the embodiment of the present application can also be executed by an image processing device such as a VR host and VR glasses. As shown in FIG. 9A, the method includes the following steps:
  • The image processing device may generate a reference layer set according to the color image data.
  • The reference layer set includes multiple reference layers; the reference layers have the same image size, and the resolutions of the reference layers are different from each other and are smaller than the image resolution corresponding to the color image data.
  • A reference layer may be obtained by subjecting the color image data to resolution reduction processing according to a preset rule, and then performing size processing according to the image size of the color image data, such as a simple size enlargement.
  • If the size of each reference layer obtained after the resolution reduction processing is equal to the image size of the color image data, the size of the reference layers need not be processed; if, after the color image data is subjected to resolution reduction processing according to the preset rule, the size of each reference layer obtained is smaller than the image size of the color image data, the size of each reference layer can be enlarged so that it equals the image size of the color image data. That is, each reference layer can be understood as an image with the same size as, but a lower resolution than, the color image data of the first region.
  • In this way, reference layers with different degrees of blurring and the same image size can be generated according to the preset rule.
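  • A minimal sketch of building such a reference layer set with plain NumPy (block-average downsampling followed by nearest-neighbour enlargement back to the original size; the scale factors are just example values):

```python
import numpy as np

def build_reference_layers(color, factors=(2, 4, 8)):
    """Return same-sized, progressively blurrier copies of `color` (HxWxC;
    H and W are assumed divisible by each factor for simplicity)."""
    h, w, c = color.shape
    layers = []
    for f in factors:
        # Resolution reduction: average f x f blocks.
        small = color.reshape(h // f, f, w // f, f, c).mean(axis=(1, 3))
        # Size processing: enlarge back to the original image size.
        enlarged = np.repeat(np.repeat(small, f, axis=0), f, axis=1)
        layers.append(enlarged)
    return layers

# e.g. a 600x600 first-region image yields 300x300-, 150x150- and 75x75-detail
# layers, each stored at the original 600x600 size.
```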
  • the color image data is rendered based on the depth image data and the first rendering rule in S902 to obtain a rendered image with a visual depth of field effect.
  • the process of visual depth of field rendering is described in the embodiment corresponding to FIG. 10.
  • the color image data may be rendered based on the depth information reflected by the depth image data and the first rendering rule.
  • Rendering the color image data based on the depth information reflected by the depth image data and the first rendering rule may specifically refer to: for each pixel in the color image data, selecting a target layer from the reference layers according to the depth information reflected by the depth image data, determining the color value of the corresponding pixel on the first sub-image to be generated, and completing the rendering of that pixel, thereby completing the rendering of the color image data.
  • the reference layer is generated based on the color image data.
  • Each reference layer has the same image size, and the resolution between the reference layers is different and smaller than the image resolution corresponding to the color image data.
  • FIG. 9B is a specific flowchart of step S902 in the embodiment of the present application. As shown in FIG. 9B, step S902 includes the following steps:
  • S914 Perform visual depth of field rendering on the color image data according to the target focal length and the reference focal length to obtain a rendered image with a visual depth of field effect.
  • The pixel values of the depth image data reflect the distance between the surface of the scene object corresponding to each pixel in the image of the current scene and the target user's eyes, that is, the depth information. Therefore, the depth information of the fixation point pixel can be used as the target focal length of the target user.
  • the reference focal length of the non-fixation point pixels in the color image data may correspond to the pixel value of each non-fixation point pixel in the depth image data.
  • Step S914 includes the following steps:
  • S1101 Determine difference information between the reference focal length and the target focal length of the target pixel in the non-fixation point pixels, where the target pixel point in the non-fixation area is a currently calculated pixel point in the non-fixation area;
  • S1102 Determine the mapping value of the target pixel according to the difference information, and find the target layer from the reference layer set based on the mapping value of the target pixel;
  • S1103 Determine a color value of the target pixel according to a color value of a pixel having the same image position as the target pixel on the target layer.
  • the mapping value may refer to a CoC value, and the mapping value may be an arbitrary number between 0 and 1.
  • the size of the mapping value may reflect the distance between the target pixel and the fixation point pixel in the non-fixation point pixels. The larger the mapping value is, the farther the distance between the target pixel and the fixation point pixel is, and the farther the distance between the target pixel and the fixation point pixel is, the more blurred the image of the target pixel is. Therefore, the larger the mapping value, the lower the sharpness of the target pixel.
  • Specifically, the method may include: presetting a correspondence between at least one set of reference layers and mapping values, and then finding, from the preset correspondence, the target layer corresponding to the mapping value of the target pixel according to the mapping value of the target pixel.
  • the resolution of the color image data in the first area is 600x600
  • multiple reference layers with resolutions of 300x300, 150x150, and 75x75 are generated according to preset rules.
  • The mapping value corresponding to the reference layer with a resolution of 300x300 can be 0.2;
  • the mapping value corresponding to the reference layer with a resolution of 150x150 may be 0.5;
  • the mapping value corresponding to the reference layer with a resolution of 75x75 may be 0.8.
  • If the mapping value of the target pixel is 0.5, the reference layer with a resolution of 150x150 is selected as the target layer. It should be noted that the correspondence between reference layers and mapping values should follow the rule that the larger the mapping value, the lower the resolution of the reference layer.
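  • Using the numbers from this example, the lookup could be sketched as follows (the closest-value selection is an assumption; the patent only states the correspondence table):

```python
# Correspondence between mapping (CoC) values and reference-layer resolutions,
# taken from the example above: larger mapping value -> lower resolution.
COC_TO_LAYER = [(0.2, "300x300"), (0.5, "150x150"), (0.8, "75x75")]

def select_target_layer(coc):
    """Pick the reference layer whose preset mapping value is closest to `coc`."""
    return min(COC_TO_LAYER, key=lambda pair: abs(pair[0] - coc))[1]

assert select_target_layer(0.5) == "150x150"   # the example in the text
```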
  • The target pixel in step S1102 may be any one of the non-fixation point pixels.
  • In one embodiment, the difference information may include the focal length difference between the reference focal length of the target pixel and the target focal length; for example, if the target focal length is f and the reference focal length of the target pixel is f0, the focal length difference between the reference focal length of the target pixel and the target focal length is f0 - f.
  • In this case, the method for determining the mapping value of the target pixel according to the difference information in step S1102 may be: determining the mapping value of the target pixel according to the focal length difference in the difference information. Specifically, a correspondence between at least one set of focal length differences and mapping values is determined in advance; after the focal length difference between the reference focal length of the target pixel and the target focal length is obtained, the mapping value corresponding to that focal length difference is found in the above correspondence and used as the mapping value of the target pixel.
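  • For illustration, one simple way to turn a focal length difference into a mapping value in [0, 1] is shown below (the normalisation is chosen for the sketch; the patent leaves the concrete correspondence to a preset table):

```python
def mapping_value_from_focal_diff(ref_focal, target_focal, max_diff=10.0):
    """Map |f0 - f| to a CoC-like value in [0, 1]: pixels whose depth is close
    to the gaze depth stay sharp (value near 0), distant ones blur (near 1)."""
    diff = abs(ref_focal - target_focal)
    return min(diff / max_diff, 1.0)

# A pixel 5 depth units away from the gaze depth, with max_diff = 10:
coc = mapping_value_from_focal_diff(ref_focal=7.0, target_focal=2.0)  # -> 0.5
```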
  • In another embodiment, the difference information in S1102 may further include a mapping value difference, and the mapping value difference may refer to the difference between the mapping value corresponding to the reference focal length of the target pixel and the mapping value corresponding to the target focal length.
  • Specifically, a correspondence between a set of focal lengths (including the target focal length and the reference focal lengths) and mapping values is set in advance. After the target focal length of the fixation point pixel is determined, the mapping value corresponding to the target focal length is found according to the above preset relationship; after the reference focal length of the target pixel is determined, the mapping value corresponding to the reference focal length is found according to the above preset relationship; further, the mapping value difference is determined according to the mapping value corresponding to the target focal length and the mapping value corresponding to the reference focal length of the target pixel.
  • In this case, the method for determining the mapping value of the target pixel according to the difference information in step S1102 may be: determining the mapping value of the target pixel according to the mapping value difference in the difference information, that is, using the mapping value difference as the mapping value of the target pixel.
  • The generation process of the reference layer set mentioned in step S1102 may be: generating, from the color image data of the first region according to a preset rule, a plurality of reference layers with different resolutions but the same size, and then grouping the multiple reference layers into a reference layer set. The resolution of each reference layer is different and is not greater than the resolution of the color image data, and the size of each reference layer is the same and equal to the size of the color image data.
  • When there is one target layer, the determination of the color value of the target pixel by the image processing device may specifically include: using the color value of the pixel on the target layer having the same image position as the target pixel as the color value of the target pixel.
  • When there are at least two target layers, the determination of the color value of the target pixel by the image processing device may specifically include: obtaining at least two color values from the pixels on the at least two target layers having the same image position as the target pixel, calculating the at least two color values according to a preset operation rule, and using the calculated value as the color value of the target pixel.
  • the preset calculation rule may be an average calculation, a weighted average operation, or other calculations, which are not specifically limited in the embodiments of the present application.
  • That is, the number of target layers found in the reference layer set based on the mapping value of the target pixel in S1102 can be determined first, and the color value of the target pixel is then determined based on the number of target layers and the color values of the pixels having the same image position as the target pixel.
  • When there is a single target layer, the color value of the pixel on the target layer having the same image position as the target pixel is determined as the color value of the target pixel.
  • FIG. 12A is a flowchart of the method for determining the color value of the target pixel when there is one target layer.
  • In FIG. 12A, area A represents the first area, F represents the fixation point pixel in the first area, and B represents the target pixel among the non-fixation point pixels in the first area.
  • Assume that the acquired color image data of the first region is an image with a resolution of 600x600.
  • The mapping value of the target pixel B, determined according to the difference information between the reference focal length of the target pixel and the target focal length, is expressed as CoC_B (CoC_B is an arbitrary number between 0 and 1).
  • Reference layers with resolutions of 300x300, 150x150, 75x75, and 50x50 are generated, and according to the preset correspondence between reference layers and mapping values, the target layer corresponding to CoC_B is found to be the layer with a resolution of 75x75 in the reference layer set.
  • Then the pixel on the target layer having the same image position as the target pixel B, that is, B', can be found, and the color value at B' is used as the color value of the target pixel B.
  • When there are at least two target layers, the color values of the pixels in the at least two target layers having the same image position as the target pixel can be obtained respectively, the color values are calculated according to a preset calculation rule, and the calculated value is used as the color value of the target pixel.
  • For example, the target layers may be the reference layer with a resolution of 75x75 and the reference layer with a resolution of 50x50 in the reference layer set, as shown in FIG. 12B.
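  • A sketch of the two-layer case, assuming the preset calculation rule is a plain average (the patent also allows a weighted average or other rules):

```python
import numpy as np

def color_from_target_layers(layers, position):
    """`layers` are same-sized reference layers selected as target layers;
    `position` = (row, col) of the target pixel. With one layer this returns
    its color directly; with several it averages them."""
    r, c = position
    samples = np.stack([layer[r, c] for layer in layers]).astype(np.float32)
    return samples.mean(axis=0)

# layer_75 and layer_50 would be the 75x75-detail and 50x50-detail reference
# layers (both stored at the 600x600 image size); the names are illustrative.
# color_B = color_from_target_layers([layer_75, layer_50], position=(120, 340))
```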
  • color image data is rendered based on the depth image data and the first rendering rule to obtain a rendered image with a visual depth of field effect.
  • the eye gaze point of the target user must be determined.
  • the position of the fixation point is the part where the eye is focused and the clearest part of the rendered image.
  • the depth image data of the first area generated in advance can be used to query the depth information of the fixation point to calculate the target focal length.
  • the mapping value of each non-fixation point pixel can be calculated through the difference between the depth information of each non-fixation point pixel and the target focal length.
  • the mapping value is then used as a reference basis to query the reference layer corresponding to each non-fixation point pixel and determine its color value. For example, for a target pixel among the non-fixation point pixels, the larger the mapping value of the target pixel, the lower the resolution of the reference layer generated in advance that is queried, and the color value of the pixel on that reference layer having the same image position as the target pixel is used as the color value of the target pixel.
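  • A compact sketch of that per-pixel flow is shown below, reusing the color_from_coc helper from the earlier sketch. It assumes a depth map aligned with the first-region color image and uses a simple normalized focal-length difference as the mapping value; the normalization constant and the function names are illustrative assumptions rather than the mapping defined by the embodiments.

```python
import numpy as np

def depth_of_field_render(color_img, depth_img, fixation_xy, layers, max_defocus=1.0):
    """Foveated depth-of-field pass over the first region (illustrative sketch)."""
    fx, fy = fixation_xy
    target_focal = float(depth_img[fy, fx])        # depth at the fixation point -> target focal length
    out = color_img.copy()
    h, w = color_img.shape[:2]
    for y in range(h):
        for x in range(w):
            if (x, y) == (fx, fy):
                continue                            # the fixation pixel keeps its original color
            ref_focal = float(depth_img[y, x])      # reference focal length of this non-fixation pixel
            # Mapping value in [0, 1]: a larger focal-length difference selects a blurrier layer.
            coc = min(abs(ref_focal - target_focal) / max_defocus, 1.0)
            out[y, x] = color_from_coc(layers, coc, x, y)
    return out
```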
  • FIG. 13A is a flowchart of another image rendering method provided by an embodiment of the present application. As shown in FIG. 13A, the following steps are included:
  • Step S1301: obtain an initial image of the current scene, and determine a first region and a second region on the initial image;
  • Step S1302: render the image data of the first region in the initial image based on a first rendering rule to obtain a first sub-image;
  • Step S1303: render the image data of the second region in the initial image based on a second rendering rule to obtain a second sub-image;
  • Step S1304: generate a mask layer, the size of the mask layer being the remaining area of the second region excluding the first region.
  • the size of the mask layer may be determined according to the size of the remaining area; to achieve a natural combination of the first sub-image and the second sub-image, the size of the mask layer may also be equal to or larger than the size of the remaining area.
  • the shape of the mask layer may be the shape of the remaining area of the second region excluding the first region, such as a ring shape or a hollow-square ("回") shape.
  • Step S1305: generate a target display image based on the first sub-image, the second sub-image, and the mask layer; the first rendering rule and the second rendering rule are different.
  • in some embodiments, according to the position of the first sub-image in the initial image and the size of the mask layer, layer overlay processing is performed on the first sub-image, the second sub-image, and the mask layer to generate a target display image, where the target display image includes a gaze area and a non-gaze area; the color values of the pixels in the gaze area are calculated based on the color values of the pixels of the first sub-image, while the non-gaze area is the overlapping area formed after the second region is overlaid with the mask layer, and the color values of the pixels of the non-gaze area are determined based on the pixel color values of the second sub-image and the mask layer.
  • FIG. 13B is a schematic diagram of another method for generating a target display image provided by an embodiment of the present application.
  • as shown in FIG. 13B, a ring-shaped mask layer 1301 is generated; the size of its inner ring region is the same as the size of the first sub-image 1302, the outer ring radius of the ring is R, and the inner ring radius is r. The second sub-image 1303 is superimposed on the mask layer 1301, forming a superimposed region 1303a and a non-superimposed region 1303b. The area corresponding to the first sub-image 1302 is the gaze area 1304a, and the remaining area of the superimposed second sub-image 1303 excluding the gaze area 1304a is determined as the non-gaze area 1304b of the target display image. The gaze area and the non-gaze area are rendered according to the color value of each pixel in the gaze area and the color value of each pixel in the non-gaze area to obtain the target display image 1304.
  • in one embodiment, the color value of each pixel in the non-gaze area of the target display image may be calculated using the formula B = O*M, where B represents the color value of the target pixel in the non-gaze area, O represents the color value of the pixel on the second sub-image having the same image position as the target pixel, and M represents the mask value of the pixel on the mask layer having the same image position as the target pixel; the target pixel here is the pixel currently being calculated in the non-gaze area.
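  • A rough sketch of this ring-mask composition is given below, as an illustration only, under the assumptions that the first sub-image has already been placed on a full-size canvas aligned with the second sub-image, that the mask value falls off as M = 1 - (l/R) with the distance l from the center (as in the described embodiment), and that pixels beyond the outer radius keep the value at l = R:

```python
import numpy as np

def compose_with_ring_mask(first_sub, second_sub, center, r_inner, r_outer):
    """Ring-mask overlay: gaze area from the first sub-image, periphery attenuated as B = O*M."""
    h, w = second_sub.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xs - center[0]) ** 2 + (ys - center[1]) ** 2)
    # Mask value M = 1 - (l / R); np.clip keeps M at 0 at and beyond the outer radius R.
    mask = np.clip(1.0 - dist / r_outer, 0.0, 1.0)[..., None]
    out = second_sub.astype(np.float32) * mask          # non-gaze area: B = O * M
    gaze = dist <= r_inner                               # inside the inner ring radius r
    out[gaze] = first_sub[gaze]                          # gaze area keeps the sharp first sub-image
    return out.astype(second_sub.dtype)
```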
  • the embodiment of the present application further provides a schematic block diagram of a structure of an image rendering device shown in FIG. 14.
  • the image rendering device in the embodiment of the present application includes an obtaining unit 1401, a determining unit 1402, a rendering unit 1403, and a generating unit 1404.
  • the image rendering apparatus may also be provided in a device that needs to render image data.
  • the obtaining unit 1401 is configured to obtain an initial image of the current scene; the determining unit 1402 is configured to determine a first region and a second region on the initial image; the rendering unit 1403 is configured to render the image data of the first region in the initial image based on the first rendering rule to obtain a first sub-image, and is further configured to render the image data of the second region in the initial image based on the second rendering rule to obtain a second sub-image;
  • a generating unit 1404 is used to generate a target display image according to the first sub-image and the second sub-image.
  • the implementation in which the determining unit 1402 determines the first region on the initial image may be: performing eye tracking processing on the target user by using a human eye tracking strategy to determine a gaze point on the initial image, and determining the first region on the initial image according to the gaze point and the target field of view (FOV).
  • the first rendering rule and the second rendering rule are different.
  • when the rendering unit 1403 is configured to render the image data of the first region in the initial image based on the first rendering rule to obtain the first sub-image, the implementation may be: obtaining color image data of the first region from the initial image, and obtaining depth image data of the first region; rendering the color image data based on the depth image data and the first rendering rule to obtain a rendered image with a visual depth of field effect; and using the obtained rendered image as the first sub-image.
  • the implementation of rendering the color image data based on the depth image data and the first rendering rule to obtain a rendered image with a visual depth of field effect may be: determining a fixation point pixel from the color image data; determining depth information of the fixation point pixel according to the depth image data, and determining a target focal length for the target user based on the depth information of the fixation point pixel; determining a reference focal length of non-fixation point pixels in the color image data according to the target focal length and the depth image data; and performing visual depth of field rendering on the color image data according to the target focal length and the reference focal length to obtain a rendered image with a visual depth of field effect.
  • the reference layer set includes multiple reference layers; the reference layers have the same image size, and the resolutions between the reference layers are different and smaller than the image resolution corresponding to the color image data.
  • an implementation of performing visual depth of field rendering on the color image data according to the target focal length and the reference focal length to obtain a rendered image with a visual depth of field effect may be as follows: determining difference information between the reference focal length of a target pixel among the non-fixation point pixels and the target focal length; determining a mapping value of the target pixel according to the difference information, and finding a target layer from the reference layer set based on the mapping value of the target pixel; and determining the color value of the target pixel according to the color value of the pixel having the same image position as the target pixel on the target layer.
  • when the number of target layers is one, an implementation of determining the color value of the target pixel according to the color value of the pixel on the target layer having the same image position as the target pixel may be: using the color value of the pixel on the target layer having the same image position as the target pixel as the color value of the target pixel.
  • when the number of target layers is at least two, an implementation of determining the color value of the target pixel according to the color values of such pixels may be: obtaining the color values of the pixels in the at least two target layers having the same image position as the target pixel to obtain at least two color values; calculating the at least two color values according to a preset operation rule, and using the calculated value as the color value of the target pixel.
  • when the rendering unit 1403 is configured to render the image data of the second region in the initial image based on the second rendering rule to obtain the second sub-image, the implementation may be: rendering the image data of the second region based on the resolution and image quality parameters indicated by the second rendering rule to obtain the second sub-image.
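  • As a purely illustrative sketch of such a rule (the actual resolution and quality parameters are whatever the second rendering rule specifies), the peripheral region might simply be rasterized at a fraction of the display resolution and upscaled before composition; the render_fn callback below is an assumed placeholder for the scene rasterizer:

```python
import cv2

def render_second_region(render_fn, width, height, scale=0.5):
    """Render the peripheral (second) region at reduced resolution, then upscale (sketch)."""
    low_res = render_fn(int(width * scale), int(height * scale))   # assumed rasterization callback
    return cv2.resize(low_res, (width, height), interpolation=cv2.INTER_LINEAR)
```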
  • the specific manner in which the generating unit 1404 generates the target display image based on the first sub-image and the second sub-image is: generating a mask layer; and, according to the position of the first sub-image in the initial image, performing layer overlay processing on the first sub-image, the second sub-image, and the mask layer to generate a target display image, where the target display image includes a gaze area and a non-gaze area;
  • the gaze area is the overlapping area formed by performing layer overlay processing on the first sub-image, the second sub-image, and the mask layer, and the color value of a pixel in the gaze area is calculated based on the color values of the pixels in the first sub-image, the second sub-image, and the partial region of the mask layer corresponding to the overlap region; the color value of a pixel in the non-gaze area is determined according to the pixel color values of the second sub-image.
  • during the layer overlay processing, the color value of each pixel in the gaze area is calculated using the formula B = I*M + O*(1-M), where B represents the color value of the target pixel in the gaze area, I represents the color value of the pixel on the first sub-image having the same image position as the target pixel, O represents the color value of the pixel on the second sub-image having the same image position as the target pixel, and M represents the mask value of the pixel on the mask layer having the same image position as the target pixel.
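  • A minimal sketch of this blend is shown below, assuming full-size, aligned sub-images and a circular mask whose value falls off as M = 1 - (r/R) from the mask center, as described for the circular mask earlier in the document; outside the circle M is 0, so those pixels simply keep the second sub-image's color:

```python
import numpy as np

def blend_gaze_area(first_sub, second_sub, center, radius):
    """Gaze-area blend B = I*M + O*(1-M): smooths the edge between the two renders."""
    h, w = second_sub.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xs - center[0]) ** 2 + (ys - center[1]) ** 2)
    m = np.clip(1.0 - dist / radius, 0.0, 1.0)[..., None]   # mask value M = 1 - (r/R), 0 outside
    i = first_sub.astype(np.float32)                         # I: high-quality first sub-image
    o = second_sub.astype(np.float32)                        # O: lower-quality second sub-image
    blended = i * m + o * (1.0 - m)                          # B = I*M + O*(1-M)
    return blended.astype(second_sub.dtype)
```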
  • in this embodiment of the present application, after the obtaining unit 1401 obtains the initial image of the current scene, the determining unit 1402 determines the first region and the second region on the initial image; further, the rendering unit 1403 renders the image data of the first region and the image data of the second region based on the first rendering rule and the second rendering rule respectively to obtain the first sub-image and the second sub-image, so that the generating unit 1404 generates the target display image according to the first sub-image and the second sub-image, realizing targeted, region-by-region image rendering.
  • the functions of the image rendering device acquisition unit 1401, the determination unit 1402, the rendering unit 1403, and the generation unit 1404 shown in FIG. 14 may further include the following:
  • An obtaining unit 1401, configured to obtain an initial image of a current scene
  • a determining unit 1402 configured to determine a first region and a second region on the initial image
  • a rendering unit 1403, configured to render image data of the first region in the initial image based on a first rendering rule to obtain a first sub-image
  • a rendering unit 1403, configured to render image data of the second region in the initial image based on a second rendering rule to obtain a second sub-image
  • a generating unit 1404 configured to generate a mask layer, the size of the mask layer being a remaining area excluding the first area in the second area;
  • a generating unit 1404 is configured to generate a target display image according to the first sub-image, the second sub-image, and the mask layer; wherein the first rendering rule and the second rendering rule are different.
  • in some embodiments, the generating unit 1404 performs layer overlay processing on the first sub-image, the second sub-image, and the mask layer according to the position of the first sub-image in the initial image and the size of the mask layer to generate a target display image, where the target display image includes a gaze area and a non-gaze area; the color values of the pixels in the gaze area are calculated based on the color values of the pixels of the first sub-image, while the non-gaze area is the overlapping area formed after the second region is overlaid with the mask layer, and the color values of the pixels of the non-gaze area are determined based on the pixel color values of the second sub-image and the mask layer.
  • in some embodiments, the generating unit 1404 determines the gaze area corresponding to the first sub-image; calculates the color value of each pixel in the gaze area of the target display image; superimposes the mask layer on the second sub-image and determines the remaining area of the superimposed second sub-image excluding the gaze area as the non-gaze area of the target display image; calculates the color value of each pixel in the non-gaze area; and renders the gaze area and the non-gaze area according to the color value of each pixel in the gaze area and the color value of each pixel in the non-gaze area to obtain the target display image.
  • FIG. 15 is a schematic block diagram of a structure of an image processing device according to an embodiment of the present application.
  • the image processing device shown in FIG. 15 may include one or more processors 1501 and one or more memories 1502.
  • the processor 1501 and the memory 1502 are connected through a bus 1503.
  • the memory 1502 is configured to store a computer program, where the computer program includes program instructions, and the processor 1501 is configured to execute the program instructions stored in the memory 1502.
  • the memory 1502 may include a volatile memory, such as a random-access memory (RAM); the memory 1502 may also include a non-volatile memory, such as a flash memory or a solid-state drive (SSD); the memory 1502 may further include a combination of the above types of memories.
  • the processor 1501 may be a central processing unit CPU.
  • the processor 1501 may further include a hardware chip.
  • the above hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or the like.
  • the PLD may be a field-programmable gate array (FPGA), a generic array logic (GAL), or the like.
  • the processor 1501 may be a combination of the above structures.
  • the memory 1502 is configured to store a computer program, and the computer program includes program instructions, and the processor 1501 is configured to execute the program instructions stored in the memory 1502 to implement steps of a corresponding method in the foregoing embodiments.
  • in one embodiment, the processor 1501 is configured to call the program instructions to: obtain an initial image of the current scene; determine a first region and a second region on the initial image; render the image data of the first region in the initial image based on the first rendering rule to obtain a first sub-image; render the image data of the second region in the initial image based on the second rendering rule to obtain a second sub-image; and generate a target display image according to the first sub-image and the second sub-image.
  • the implementation manner of the processor 1501 for determining the first region on the initial image may be: performing eye tracking processing on the target user by using a human eye tracking strategy to determine a gaze point on the initial image; The first region on the initial image is determined according to the gaze point and the target field angle FOV.
  • the first rendering rule and the second rendering rule are different.
  • when the processor 1501 is configured to render the image data of the first region in the initial image based on the first rendering rule to obtain the first sub-image, an implementation manner may be: obtaining color image data of the first region from the initial image, and obtaining depth image data of the first region; rendering the color image data based on the depth image data and the first rendering rule to obtain a rendered image with a visual depth of field effect; and using the obtained rendered image as the first sub-image.
  • an implementation in which the processor 1501 renders the color image data based on the depth image data and the first rendering rule to obtain a rendered image with a visual depth of field effect may be: determining fixation point pixels in the color image data; determining depth information of the fixation point pixels according to the depth image data, and determining a target focal length for the target user based on the depth information of the fixation point pixels; determining a reference focal length of non-fixation point pixels in the color image data according to the target focal length and the depth image data; and performing visual depth of field rendering on the color image data according to the target focal length and the reference focal length to obtain a rendered image with a visual depth of field effect.
  • the reference layer set includes multiple reference layers; the reference layers have the same image size, and the resolutions between the reference layers are different and smaller than the image resolution corresponding to the color image data.
  • the processor 1501 may perform a visual depth of field rendering on the color image data according to the target focal length and the reference focal length to obtain a rendered image with a visual depth of field effect.
  • an implementation manner may be: determining difference information between the reference focal length of a target pixel among the non-fixation point pixels and the target focal length; determining a mapping value of the target pixel according to the difference information, and searching for a target layer in the reference layer set based on the mapping value of the target pixel; and determining the color value of the target pixel according to the color value of the pixel on the target layer having the same image position as the target pixel.
  • when the number of target layers is one, an implementation in which the processor 1501 determines the color value of the target pixel according to the color value of the pixel on the target layer having the same image position as the target pixel may be: using the color value of the pixel on the target layer having the same image position as the target pixel as the color value of the target pixel.
  • when the number of target layers is at least two, an implementation in which the processor 1501 determines the color value of the target pixel according to the color values of the pixels on the target layers having the same image position as the target pixel may be: obtaining the color values of the pixels in the at least two target layers having the same image position as the target pixel to obtain at least two color values; calculating the at least two color values according to a preset operation rule, and using the calculated value as the color value of the target pixel.
  • when the processor 1501 is configured to render the image data of the second region in the initial image based on the second rendering rule to obtain the second sub-image, an implementation manner may be: rendering the image data of the second region based on the resolution and image quality parameters indicated by the second rendering rule to obtain the second sub-image.
  • the specific manner in which the processor 1501 generates the target display image according to the first sub-image and the second sub-image is: generating a mask layer; and, according to the position of the first sub-image in the initial image, performing layer overlay processing on the first sub-image, the second sub-image, and the mask layer to generate a target display image, where the target display image includes a gaze area and a non-gaze area;
  • the gaze area is the overlapping area formed by performing layer overlay processing on the first sub-image, the second sub-image, and the mask layer, and the color value of a pixel in the gaze area is calculated based on the color values of the pixels in the first sub-image, the second sub-image, and the partial region of the mask layer corresponding to the overlap region; the color value of a pixel in the non-gaze area is determined according to the pixel color values of the second sub-image.
  • during the layer overlay processing, the color value of each pixel in the gaze area is calculated using the formula B = I*M + O*(1-M), where B represents the color value of the target pixel in the gaze area, I represents the color value of the pixel on the first sub-image having the same image position as the target pixel, O represents the color value of the pixel on the second sub-image having the same image position as the target pixel, and M represents the mask value of the pixel on the mask layer having the same image position as the target pixel.
  • a person of ordinary skill in the art may understand that all or some of the procedures of the methods in the foregoing embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and when executed, may include the procedures of the foregoing method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present application disclose an image rendering method and apparatus, an image processing device, and a storage medium. The method includes: after an initial image of the current scene is obtained, determining a first region and a second region on the initial image; further, rendering the image data of the first region based on a first rendering rule to obtain a first sub-image, and rendering the image data of the second region based on a second rendering rule to obtain a second sub-image; and finally generating a target display image according to the first sub-image and the second sub-image.

Description

一种图像渲染方法、装置及图像处理设备、存储介质
本申请要求于2018年8月21日提交中国专利局、申请号为201810954469.7、名称为“一种图像渲染方法、装置及图像处理设备、存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理技术领域,尤其涉及一种图像渲染方法、装置及图像处理设备、存储介质。
背景
在当今的信息时代,随着电子技术和计算机技术的快速发展,虚拟现实(Virtual Reality,VR)作为视觉仿真与计算机图像等多种技术的集合被应用在越来越多的行业中,比如VR游戏行业,医疗行业以及教育行业等。
技术内容
本申请实施例提供了一种图像渲染方法,由图像处理设备执行,包括:获取当前场景的初始图像,并确定所述初始图像上的第一区域和第二区域;基于第一渲染规则对所述初始图像中所述第一区域的图像数据进行渲染,得到第一子图像;基于第二渲染规则对所述初始图像中所述第二区域的图像数据进行渲染,得到第二子图像;根据所述第一子图像和所述第二子图像生成目标展示图像;其中,所述第一渲染规则和所述第二渲染规则不相同。
本申请实施例还一种图像渲染方法,由图像处理设备执行,包括:获取当前场景的初始图像,并确定所述初始图像上的第一区域和第二区域;基于第一渲染规则对所述初始图像中所述第一区域的图像数据进行渲染,得到第一子图像;基于第二渲染规则对所述初始图像中所述第二区域的图像数据进行渲染,得到第二子图像;生成遮罩图层,所述遮罩图层的大小为第二区域中除去第一区域的剩余区域;根据所述第一子图像、所述第二子图像以及所述遮罩图层生成目标展示图像;其中,所述第一渲染规则和所述第二渲染规则不相同。
本申请实施例还提供了一种图像渲染装置,包括:获取单元,用于获取当前场景的初始图像;确定单元,用于确定所述初始图像上的第一区域和第二区域;渲染单元,用于基于第一渲染规则对所述初始图像中所述第一区域的图像数据进行渲染,得到第一子图像;所述渲染单元,还用于基于第二渲染规则对所述初始图像中所述第二区域的图像数据进行渲染,得到第二子图像;生成单元,用于根据所述第一子图像和所述第二子图像生成目标展示图像;其中,所述第一渲染规则和所述第二渲染规则不相同。
本申请实施例还提供了一种图像渲染装置,包括:获取单元,用于获取当前场景的初始图像;确定单元,用于确定所述初始图像上的第一区域和第二区域;渲染单元,用于基于第一渲染规则对所述初始图像中所述第一区域的图像数据进行渲染,得到第一子图像;所述渲染单元,还用于基于第二渲染规则对所述初始 图像中所述第二区域的图像数据进行渲染,得到第二子图像;生成单元,用于生成遮罩图层,所述遮罩图层的大小为第二区域中除去第一区域的剩余区域;所述生成单元,用于根据所述第一子图像、所述第二子图像以及所述遮罩图层生成目标展示图像;其中,所述第一渲染规则和所述第二渲染规则不相同。
本申请实施例还提供了一种图像处理设备,包括:包括处理器和存储器,所述存储器用于存储计算机程序,所述计算机程序包括程序指令,所述处理器被配置用于调用所述程序指令,执行本申请实施例所述的图像渲染方法。
本申请实施例还提供了一种计算机存储介质,该计算机存储介质中存储有计算机程序指令,该计算机程序指令被处理器执行时,用于执行本申请实施例所述的图像渲染方法。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种图像渲染方法的应用场景图;
图2是本申请实施例提供的一种图像渲染流程的架构图;
图3A是本申请实施例提供的一种图像渲染方法的流程图;
图3B为本申请实施例的步骤S301的具体流程图;
图3C为本申请实施例的步骤S302的具体流程图;
图3D为本申请实施例的步骤S304的具体流程图;
图3E为本申请实施例的步骤S332的具体流程图;
图3F为本申请实施例的步骤S304的另一具体流程图;
图3G为本申请实施例的步骤S351的具体流程图;
图3H为本申请实施例的步骤S352的具体流程图;
图3I为本申请实施例的步骤S353的具体流程图;
图4是本申请实施例提供的一种视场角与图像渲染范围关系的示意图;图5是本申请实施例提供的一种目标展示图像;
图6A是本申请实施例提供的一种目标展示图像生成方法示意图;
图6B是本申请实施例提供的一种注视区域上像素点的颜色值计算方法示意图;
图7A是本申请实施例提供的一种第一区域的示意图;
图7B是本申请实施例提供的一种确定混合区域的方法示意图;
图8是本申请实施例提供的另一种确定混合区域的方法示意图;
图9A是本申请实施例提供的一种对第一区域的图像数据渲染方法流程图;
图9B为本申请实施例的步骤S902的具体流程图;
图10是本申请实施例提供的一种视觉景深渲染方法的流程图;
图11是本申请实施例提供的一种根据目标焦距和参考焦距对彩色图像进行视觉景深渲染的流程图;
图12A是本申请实施例提供的一种根据目标图层确定目标像素的颜色值的方法流程图;
图12B是本申请实施例提供的另一种根据目标图层确定目标像素的颜色值的方法流程图;
图13A是本申请实施例提供的另一种图像渲染方法的流程图;
图13B是本申请本申请实施例提供的另一种目标展示图像生成方法示意图;
图14是本申请实施例提供的一种图像渲染装置的结构示意图;
图15是本申请实施例提供的一种图像处理设备的结构示意图。
实施方式
在研究用户对头戴式显示装置中图像的观感体验时发现,对头戴式显示装置内当前场景的图像进行渲染的流程包括:将渲染当前场景内的图像所需要的三角形、材质贴图等图像数据素材通过CPU(Central Processing Unit,中央处理器)搬移至GPU(Graphics Processing Unit,图形处理器),GPU通过渲染管线对图像数据素材进行渲染,得到初始图像,然后运用图像渲染后处理技术对初始图像进行着色等渲染,最后得到能够展示给用户的当前VR场景下的图像。在渲染过程中,针对用户左右眼的视差效果需求,在渲染时需要按照左右眼不同的参数经过渲染管线进行两次渲染,进而产出符合头戴式显示装置视差立体效果的图像。
在对VR影像的渲染过程中,为了保证用户不产生眩晕不舒服的感觉,必须保持渲染过程中影像刷新率,例如在一些场景中需要不低于90fps甚至更高的刷新率,如此一来导致VR视觉成像的质量受到运算效率的限制而减低,降低用户体验。
在对初始图像的后处理过程中,经过研究发现,透过人眼水晶体的变焦处理会产生在聚焦点处的图像非常清晰,但在非聚焦点处的图像比较模糊的景深效果。
因此,本申请实施例引入了基于注视点的渲染技术,将头戴式显示装置101中当前场景内的初始图像拆分为解析质量比较高的第一区域(Inset)11,(可以理解为注视区域,也即人眼关注的聚焦点部分,如图1中以注视点为中心的阴影部分)和解析质量相对较低的第二区域(Outset)12(如图1非阴影部分)。其中,在第一区域11的图像数据的渲染过程中,可以按照较高的分辨率进行渲染并叠加视觉景深效果,而在第二区域12的图像数据的渲染过程中,按照较低分辨率且基于能够降低效能使用率的影像质量参数进行渲染。由此可实现在第一区域11产生清晰的并且叠加了视觉景深效果的图像,在第二区域12产生模糊的图像。在一个实施例中,可以预设能够降低效能使用率的图像质量参数。
完成第一区域的图像渲染和第二区域的图像渲染后,再将对两个区域渲染得到的两个子图像融合为一个图像显示给用户,这样能够有效的提升人眼的景深观感,同时也不会明显提高图像渲染所消耗的软硬件资源,能兼顾到头戴式显示装置显示的图像渲染所需的效能问题,达到更高质量及更沉浸的头戴式显示装置体验。
请参见图2,为本申请实施例提供的图像渲染流程的架构图。在图2所示的图像渲染流程架构图中,获取到当前场景的初始图像之后,可以得到第一区域,该第一区域的确定过程可以包括:确定初始图像上的注视点,然后根据注视点和目标视场角(FOV,Field Of View)确定出投影矩阵,进而得到影像呈现范围,将此影像呈现范围确定为初始图像上的第一区域。
上述视场角又可以叫做视场,视场角的大小决定了人眼的视野范围,视场角 越大,视野就越大。一般来说,目标物体超过这个角就不会被人眼看到。
进一步地,根据初始图像和初始图像上的第一区域可确定出初始图像上的第二区域。对于第一区域,基于第一渲染规则对第一区域的图像数据进行渲染,得到具有视觉景深效果的第一子图像。对于第二区域,基于第二渲染规则对第二区域的图像数据进行渲染,得到第二子图像。最后将第一子图像和第二子图像进行融合得到目标展示图像。
再请参见图3A,是本申请实施例的一种图像渲染方法的流程示意图,本申请实施例的图像渲染方法可以应用在虚拟现实(VR,Virtual Reality)场景中,由图像处理设备来执行,所述图像处理设备可以是VR主机、VR眼镜、或者其他能够进行相应的图像渲染处理的设备。如果图像处理设备是VR主机,本申请实施例的图像渲染方法具体可以由VR主机来实现,由VR主机将渲染处理后得到的图像发送给VR眼镜显示。当然,如果图像处理设备是VR眼镜,本申请实施例的图像渲染方法也可以由自带图像渲染功能的VR眼镜来执行并显示最终渲染得到的图像。在其他实施例中,本申请实施例的图像渲染方法还可应用在其他的需要进行图像渲染的应用场景中。
如图3A所示,本申请实施例的一种图像渲染方法包括以下步骤:
S301,获取当前场景内的初始图像,并确定初始图像上的第一区域和第二区域。
在一些实施例中,在利用图3A所示的图像渲染方法进行图像渲染时,图像处理设备首先需要获取当前场景内的初始图像。在一个实施例中,初始图像的获取方式可以为:获取渲染当前场景内的图像所需的三角形与材质贴图等素材,再通过图形处理器(GPU,Graphics Processing Unit)将所述素材基于渲染管线进行渲染,得到初始图像。或者初始图像的获取方式还可以是通过其他渲染方式对当前场景内的图像进行渲染得到的,本申请实施例中不做具体限定。
在获取到初始图像后,图像处理设备获取当前场景内的初始图像,并确定初始图像上的第一区域和第二区域。其中,第一区域可指人眼聚焦的部分,也即图像最清晰的部分,第二区域可指在初始图像上包含第一区域的一部分区域,或者可指在初始图像上包含第一区域的整个初始图像区域,或者第二区域也可指在初始图像上除去第一区域的剩余区域。由于人眼聚焦的部分比较狭小,第一区域只占当前场景的一小部分,所以在初始图像中的第一区域可以明显小于第二区域。
图3B为步骤S301中图像处理设备确定初始图像上的第一区域和第二区域的具体流程图。如图3B所示,步骤S301可以包括以下步骤:
S311,利用人眼追踪策略对目标用户进行人眼跟踪处理,确定初始图像上的注视点;
S312,根据注视点和目标视场角FOV确定在初始图像上的第一区域;
可以理解的,利用人眼追踪策略可以确定目标用户正注视着的当前场景中的具体位置,根据该位置即可确定出初始图像上的注视点;基于FOV与投影矩阵的关系,可预设一个用来确定第一区域的目标FOV,进而根据该目标FOV来确定目标投影矩阵,再确定图像呈现范围,该图像呈现范围对应的区域即为第一区域。因此,上述根据注视点和目标FOV确定初始图像上的第一区域可以理解为:将以注视点为中心的图像呈现范围确定为第一区域。
再一个实施例中,图像处理设备在根据注视点和目标FOV确定了图像呈现范围之后,还可以将图像呈现范围扩大一定尺寸后得到第一区域,比如10%,此 时第一区域便包括了图像呈现范围所表示的区域(也称为核心区域)和基于图像呈现范围的扩大部分表示的区域(也称为边缘区域)两部分。如此以便于后续根据第一子图像和第二子图像生成目标展示图像时,对第一子图像和第二子图像进行融合处理,达到自然融合的效果。在一个实施例中,第一区域的形状可以是以注视点为中心的圆形,或者第一区域的形状也可以是以注视点为中心的正方形、长方形或者其他形状,在本申请实施例中对第一区域的具体形状不做限定。
在其他实施例中,第一区域也可以基于图像处理设备的性能和初始图像上注视点的位置来确定,图像处理设备的性能越高,选取的目标FOV可以越大,得到的第一区域也越大。也就是说,图像处理设备的性能参数与FOV、第一区域成正比。在一些实施例中,在实际使用过程中,以保证刷新率为90fps或其他刷新率为前提,建立图像处理设备的性能参数(例如GPU的显存速率、和/或处理频率等参数)与FOV的映射关系。后续在图像处理设备运行后,首先确定初始图像上的注视点,并检测图像处理设备的性能参数,根据检测到的性能参数和所述映射关系选取FOV,将选取的FOV作为目标FOV,进而根据注视点和目标FOV确定在初始图像上的第一区域。
在一个实施例中,改变目标FOV的大小可以产生不同的投影矩阵,进而生成不同的图像呈现范围,也可以称作图像渲染范围,从而便可以确定出不同的第一区域(如图4所示)。由图4可知,比较小的FOV生成的投影矩阵在渲染时会产生比较小的视野效果,比较大的FOV生成的投影矩阵在渲染时会有比较大的视野效果,比如在图4中,FOV为58°对应的图像呈现范围所限定的第一区域401小于FOV为90°对应的图像呈现范围所限定的第一区域402。
上述提及的根据注视点和目标FOV确定在初始图像上的第一区域可理解为:确定了注视点之后,调节FOV等于所述目标FOV,得到目标FOV的投影矩阵,即:可以根据目标FOV调节投影矩阵。在得到目标FOV的投影矩阵后,即可在初始图像上确定出图像呈现范围,初始图像上该图像呈现范围内的区域即为第一区域。
S313,根据所述初始图像和所述第一区域,确定出所述初始图像上的第二区域。
在一些实施例中,第二区域可指在初始图像上包含第一区域的一部分区域,或者可指在初始图像上包含第一区域的整个初始图像区域,或者第二区域也可指在初始图像上除去第一区域的剩余区域。由于人眼聚焦的部分比较狭小,第一区域只占当前场景的一小部分,所以在初始图像中的第一区域可以明显小于第二区域。
S302,基于第一渲染规则对所述初始图像中所述第一区域的图像数据进行渲染,得到第一子图像。
在一些实施例中,确定了当前场景的初始图像上的第一区域和第二区域之后,分别对第一区域和第二区域的图像数据进行渲染,图像处理设备基于第一渲染规则对初始图像中第一区域的图像数据进行渲染,得到第一子图像。在一个实施例中,基于第一渲染规则对初始图像中第一区域的图像数据进行渲染可以是基于该第一区域的彩色图像数据和为该第一区域生成的深度图像数据来进行渲染。其中,深度图像是指:包含了到初始图像中所包含的场景对象表面的距离相关信息的图像,简单理解,深度图像中的像素值反映场景对象到人眼之间的距离信息。
图3C为步骤S302中图像处理设备基于第一渲染规则对初始图像中第一区域 的图像数据进行渲染,得到第一子图像的具体流程图。如图3C所示,步骤S302包括以下步骤:
S321,从初始图像中获取第一区域的彩色图像数据,并获取第一区域的深度图像数据;
S322,基于深度图像数据和第一渲染规则,对彩色图像数据进行渲染,得到具有视觉景深效果的渲染图像;
S323,将得到的渲染图像作为第一子图像。
在一些实施例中,视觉景深效果可指在焦距距离处的图像清晰,在焦距距离之外或之内的图像模糊的视觉效果。换句话说,S302的实现方式可以是:根据第一区域的彩色图像数据和深度图像数据,结合第一渲染规则得到具有视觉景深效果的渲染图像,此渲染图像也即第一子图像。
在一个实施例中,初始渲染得到的初始图像本身即为一个彩色图像,从初始图像中截取第一区域内的图像数据即可得到第一区域的彩色图像数据。在一个实施例中,从初始图像中获取第一区域的深度图像数据的方式可为利用GPU渲染程序中的着色器(shader)搭配Z缓存(Z Buffer)的信息将虚拟摄像机可视范围内的3D物件与摄像机之间的深度信息渲染出来而成一个深度图像数据。可以理解的,Z Buffer也可称作深度缓存,其中存储了对渲染物件所需的图元进行光栅化得到的图像上每个像素到摄像机之间的距离,也就是深度信息,GPU渲染程序中的shader可以处理所有像素的饱和度、明暗等,并还可以产生模糊、高光等效果。在绘制深度图像数据时,利用shader结合Z Buffer中存储的深度值便可产生深度图像数据。
S303,基于第二渲染规则对初始图像中第二区域的图像数据进行渲染,得到第二子图像。
在一个实施例中,确定了当前场景的初始图像上的第一区域和第二区域之后,对第二区域的图像数据进行渲染时,所述图像处理设备可基于第二渲染规则对初始图像中第二区域的图像数据进行渲染,得到第二子图像。在一个实施例中,基于第二渲染规则对初始图像中第二区域的图像数据进行渲染可以是基于能够降低效能使用率的渲染方式对第二区域的图像数据进行渲染。比如,在对第二区域的图像数据进行渲染之前,可先获取对第二区域的图像数据进行渲染所需的目标素材;然后可判断素材库中是否存在与所述目标素材的类型相同但较低质量的参考素材,所述参考素材的材质质量低于所述目标素材的材质质量。若存在,则使用较低质量的参考素材完成所述第二区域的图像数据的渲染。
在一个实施例中,所述S303可包括:基于第二渲染规则所指示的分辨率和影像质量参数对第二区域的图像数据进行渲染,得到第二子图像。在一个实施例中,第二渲染规则所指示的分辨率低于在第一区域的渲染阶段对第一区域图像数据进行渲染的分辨率,第二渲染规则所指示的影像质量参数设置为可以降低效能使用率的影像质量参数。在一个实施例中,影像质量参数可指渲染分辨率和/或渲染素材质量,第二渲染规则所指示的降低效能使用率的影像质量参数可以为较低的渲染分辨率和较低质量的渲染素材。可以设置一个标准分辨率,在正常处理情况下,基于标准分辨率对初始图像进行渲染,如果检测到需要分区域进行渲染,例如检测到开启了分区域渲染功能,则以等于或者高于所述标准分辨率的分辨率来对第一区域进行渲染,以低于所述标准分辨率的分辨率来对第二区域进行渲染。
可以理解的,在第二区域的渲染阶段利用第二渲染规则所指示的分辨率和影 像质量参数进行渲染,可以通过节省第二区域渲染阶段的效能,来弥补第一区域渲染阶段的部分效能损失,从而产出一个既可以顾及到景深影像质量提升,又能够避免额外消耗过多运算效能的高质量图像效果。也就是说,需要分区域渲染时,对第一区域进行的渲染对图像处理设备的软硬件资源的消耗增加,但对第二区域进行的渲染对图像处理设备的软硬件资源的消耗减小,这样一来对图像处理设备的资源消耗并不会明显增加,甚至根据第一渲染规则和第二渲染规则中的分辨率(例如第一区域的分辨率为上述提及的标准分辨率)和影像质量参数等需求,使得在分区域渲染时,对图像处理设备的资源消耗变少,从而进一步确保刷新率。
S304,根据第一子图像和第二子图像生成目标展示图像。
在一些实施例中,在图3A所示的图像渲染方法中,分别对第一区域和第二区域进行渲染,得到了第一子图像和第二子图像之后,图像处理设备可以根据第一子图像和第二子图像生成目标展示图像,并可以将目标展示图像展示在头戴式显示装置的当前场景内。得到的目标展示图像由两部分组成,目标用户的眼睛注视部分的图像是具有视觉景深效果的清晰图像(如图5中A区域所示),该清晰图像区域是基于第一区域子图像和第二区域子图像叠加得到,非注视部分的图像是非视觉景深效果的模糊图像(如图5中B区域所示),该模糊图像区域基于第二子图像得出,如此可以在保证图像渲染的运算效率的同时,有效的提升了最终展示的VR图像的景深效果,可实现更沉浸的体验。在一个简单的实施例中,可以根据第一子图像对应的第一区域在初始图像上的区域位置,直接将第一子图像叠加到第二子图像上,覆盖第二子图像上的相应区域位置的图像内容,得到目标展示图像。
在一个实施例中,为了产生如图5所示的目标展示图像,图像处理设备在对第一区域渲染时需要使用比第二区域渲染时更高的渲染分辨率。对第一区域渲染时使用的渲染分辨率和对第二区域渲染时使用的渲染分辨率的变化值可以依据图像处理设备的性能决定。若图像处理设备性能较好,可设置两个渲染分辨率使得两个渲染分辨率之间的变化值较小,也就是说,第一区域渲染所使用的分辨率和第二区域渲染所使用的分辨率都相对较高;若图像处理设备性能一般,可设置两个渲染分辨率使得两个渲染分辨率之间的变化值较大,也就是说,第二区域渲染所使用的分辨率相对较低。
由于对第一区域和第二区域渲染时的渲染分辨率不同,所以渲染得到的第一子图像和第二子图像的图像分辨率也不相同,因此,S304在根据第一子图像和第二子图像生成目标展示图像时,为了避免因两个子图像的分辨率不同造成融合时第一子图像边缘的剧烈视觉化落差现象,引入了遮罩图层,根据遮罩图层、第一子图像和第二子图像生成目标展示图像。
在一个实施例中,图3D为本申请实施例的步骤S304中根据第一子图像和第二子图像生成目标展示图像的流程图。如图3D所示,步骤S304可以包括以下步骤:
S331,生成遮罩图层;
S332,根据第一子图像在初始图像中的位置,将第一子图像、第二子图像以及遮罩图层进行图层叠加处理,生成目标展示图像,目标展示图像包括注视区域和非注视区域;其中,注视区域为对所述第一子图像、所述第二子图像以及所述遮罩图层进行图层叠加处理而形成的重叠区域,注视区域中的像素点的颜色值是基于第一子图像、第二子图像、以及遮罩图层中与重叠区域对应的区域中的像素 点的颜色值计算得到的;非注视区域中的像素点的颜色值是根据第二子图像的像素点颜色值确定的。
可以理解的,在图像融合时引入遮罩图层,在本申请实施例中,遮罩图层的作用在于,使得在最终得到的目标展示图像上,越接近于目标展示图像中注视点的位置,像素点的颜色值越接近第一子图像的颜色值,越远离目标展示图像中注视点的位置,像素点的颜色值越接近于第二子图像的颜色值,从而产生一个平滑过渡的效果。
在一个实施例中,遮罩图层的尺寸可以依据第一子图像的尺寸确定,为了达到第一子图像和第二子图像自然结合的效果,遮罩图层的尺寸应该等于或者大于第一子图像的尺寸。在一个实施例中,遮罩图层的形状可以与第一子图像的形状相同,比如第一子图像为圆形,遮罩图层也可以为圆形;第一子图像为正方形,遮罩图层也可以为正方形。再一个实施例中,遮罩图层的形状也可以与第一子图像的形状不同,图像处理设备可根据当前场景的需要选择合适的遮罩图层的尺寸和形状。
在确定了遮罩图层的尺寸和形状之后,生成遮罩图层,再计算遮罩图层中各个像素点的遮罩数值。在一个实施例中,计算遮罩图层中目标像素点的遮罩数值的方法可以是:在遮罩图层内确定一个圆形区域,获取该圆形区域的半径值,记作R;确定目标像素点和圆形区域的圆心之间的距离,记作r;利用公式M=1-(r/R)计算目标像素点的遮罩数值,其中,M表示遮罩图层中目标像素点的遮罩数值。可以理解的,利用与目标像素点的遮罩数值相同的计算方法可以计算得到遮罩图层中非目标像素点的遮罩数值,其中,上述遮罩图层中目标像素点为遮罩图层中当前计算的像素点。
生成遮罩图层之后,图像处理设备根据第一子图像在初始图像中的位置,将第一子图像、第二子图像以及遮罩图层进行图层叠加处理,生成目标展示图像,所述目标展示图像由注视区域和非注视区域两部分组成,注视区域对应于第一子图像、第二子图像以及遮罩图层重叠的区域,非注视区域指第二子图像上除去重叠区域的剩余区域。换句话说,目标展示图像是由第一子图像、第二子图像和遮罩图层三者叠加融合得到的。
在一个实施例中,由于第二子图像的大小和初始图像大小相同,第一子图像在初始图像中的位置也可以理解为第一子图像在第二子图像中的的叠加位置,因此,图3E示出了本申请实施例的步骤S332的具体流程图。如图3E所示,S332包括以下步骤:
S341,确定第一子图像在第二子图像中的叠加区域;
S342,在第二子图像的叠加区域上叠加第一子图像和遮罩图层,形成重叠区域,也即目标展示图像的注视区域;
S343,计算目标展示图像的注视区域上各个像素点的颜色值;
S344,将第二子图像中除去注视区域的剩余区域确定为目标展示图像的非注视区域;
S345,计算非注视区域中各个像素点的颜色值;
S346,按照注视区域中各个像素点的颜色值和非注视区域中各个像素点的颜色值对目标展示图像的两个区域进行渲染,即可得到目标展示图像。
例如,图6A为一种目标展示图像生成方法的示意图,如图6A所示,601表示遮罩图层,602表示第一子图像,603表示第二子图像,604表示目标展示 图像。假设确定第一子图像602在第二子图像603中重叠区域为603a虚线框部分,则将603a作为目标展示图像上的注视区域;第二子图像603中阴影部分603b即作为目标展示图像的非注视部分,再通过计算注视区域和非注视区域中各个像素点的颜色值,得到目标展示图像。
在一个实施例中,计算目标展示图像的注视区域上各个像素点的颜色值的方式可为:利用公式B=I*M+O*(1-M)来计算目标展示图像的注视区域中的像素点的颜色值。其中,B表示注视区域中的目标像素点的颜色值,I表示所述第一子图像上与所述目标像素点具有相同图像位置的像素点的颜色值,O表示所述第二子图像上与所述目标像素点具有相同图像位置的像素点的颜色值,M表示所述遮罩图层上与所述目标像素点具有相同图像位置的像素点的遮罩数值,其中,上述注视区域中的目标像素点为注视区域中的当前计算的像素点。
例如,基于图6A中目标展示图像的生成方法,图6B为一种注视区域中像素点的颜色值计算方法的示意图,假设B点为目标展示图像中注视区域上的目标像素点,若想要确定目标像素点B的颜色值,首先分别在遮罩图层601、第一子图像602以及第二子图像603上找到与目标像素点具有相同图像位置的像素点,分别为点6011、点6021和点6031;再获取点6011、点6021和点6031的颜色值,分别记作M、I和O;最后将M、I和O代入公式B=I*M+O*(1-M)便可得到注视区域中目标像素点B的颜色值。
在一个实施例中,目标展示图像的非注视区域是指第二子图像上除去重叠区域的剩余区域,即目标展示图像的非注视区域是属于第二子图像上的一部分区域,因此计算非注视区域中各个像素点的颜色值的的方式可以是:将落入非注视区域中的第二子图像上各个像素点的颜色值作为各个像素点在非注视区域中的颜色值。
在一个实施例中,S304根据第一子图像和第二子图像生成目标展示图像的方式还可以为:将第一子图像叠加到第二子图像中对应位置,也即用第一子图像覆盖第二子图像中与第一区域对应位置处的图像,从而展示给用户的目标图像即为在人眼聚焦部分是清晰的具有视觉景深效果的图像,在非聚焦部分是模糊的非视觉景深效果的图像。
在一个实施例中,图3F为本申请实施例的步骤S304中根据第一子图像和第二子图像生成目标展示图像的具体流程图。如图3F所示,所述步骤S304可包括以下步骤:
S351,根据第二子图像和第一子图像确定混合区域;
S352,根据混合区域中各个像素点距图像中心的距离,确定混合区域中各个像素点的颜色值;
S353,基于混合区域中各个像素点的颜色值、所述第二区域中各个像素点的颜色值和所述第一区域中各个像素点的颜色值生成目标展示图像。
在一个实施例中,第一子图像包括边缘区域和核心区域,其中,核心区域是根据注视点和目标FOV确定的,边缘区域可以理解为将核心区域扩大一定尺寸比如10%得到的,该尺寸可以根据不用的应用场景设置为不同值,如图7A所示的第一区域711,阴影部分为边缘区域712,非阴影部分为核心区域703。可以理解的,图7A只是第一区域中边缘区域和核心区域的示意图,实际上边缘区域远小于核心区域。
在一个实施例中,上述第一子图像包括边缘区域和核心区域,图3G为本申 请实施例的步骤S351中所述根据第二子图像和第一子图像确定混合区域的具体流程图,如图3G所示,步骤S351包括以下步骤:
S361,在所述第二子图像中确定参考区域,所述参考区域覆盖所述第一子图像在所述第二子图像中的对应区域;
S362,将所述参考区域中除去所述核心区域的部分确定为所述混合区域。
在一些实施例中,在确定混合区域时,首先确定第一子图像叠加在第二子图像中的位置;然后在第二子图像中确定一个参考区域,该参考区域能覆盖上述第一子图像在第二子图像中的位置;再从参考区域中除去第一子图像中的核心区域,将剩下的第一子图像与参考区域重叠的部分确定为混合区域。
在一个实施例中,参考区域的大小可以大于第一子图像,此时参考区域在覆盖了第一子图像在所述第二子图像中的对应区域外,还覆盖一部分第二子图像。例如,图7B为确定混合区域的一种方法,区域701表示第一子图像,区域702表示第二子图像,区域703表示第一子图像中的核心区域,区域704表示参考区域,阴影区域705混合区域。
再一个实施例中,参考区域的大小可以等于第一子图像的大小,此时参考区域刚好可以覆盖第一子图像在所述第二子图像中的对应区域。例如,图8为确定混合区域的另一种方法,区域801既表示第一子图像,又表示参考区域,区域802表示第一子图像中的核心区域,区域803表示第二区域子图像,阴影区域804表示混合区域。
确定了混合区域之后,图像处理设备再确定混合区域中各个像素点的颜色值。作为一种可行的实施方式,图3H为本申请实施例的步骤S352中根据混合区域中各个像素点距离图像中心的距离,确定各个像素点的颜色值的具体流程图。如图3H所示,所述步骤S352可包括以下步骤:
S371,根据参考区域的半径和混合区域中目标像素点距图像中心的距离,确定目标像素点的参考颜色值,混合区域中目标像素点是混合区域中的正在计算的像素点;
S372,获取第一子图像中与混合区域中每个所述目标像素点具有相同图像位置的像素点的颜色值,并获取第二子图像中与混合区域中每个所述目标像素点具有相同图像位置的像素点的颜色值;
S373,将上述的各个颜色值代入预设公式进行计算,得到的计算结果即为混合区域中每个目标像素点的颜色值,也即,根据第一子图像中与所述混合区域中每个所述目标像素点具有相同图像位置的像素点的颜色值、第二子图像中与混合区域中每个所述目标像素点具有相同图像位置的像素点的颜色值,计算得到所述混合区域中每个目标像素点的颜色值。
同理的,对于混合区域中除目标像素点外的其他各个像素点,按照与确定目标像素点颜色值相同的方法确定混合区域中除目标像素之外的其他各个像素点的颜色值。
在一个实施例中,图3I为本申请实施例的步骤S353的具体流程图。如图3I所示,所述步骤S353可包括以下步骤:
S381,按照混合区域中各个像素点的颜色值对混合区域进行渲染;
S382,按照核心区域中各个像素点在第一子图像中的颜色值对核心区域进行渲染,
S383,对于第二子图像中除去核心区域和混合区域的剩余部分,按照剩余部 分中各个像素点在第二子图像中的颜色值进行渲染,最终得到目标展示图像。
综上所述,本申请实施例针对S304根据第一子图像和第二子图像生成目标展示图像,提供了上述两种实现方式,第一种:引入遮罩层,计算遮罩层、第一子图像和第二子图像的中重叠区域和非重叠区域的的中各个像素点的颜色值,从而得到目标展示图像;第二种:引入混合区域,根据混合区域、第一子图像和第二子图像中各个像素的颜色值,生成目标展示图像。
在根据第一子图像和第二子图像生成目标展示图像时,可依据第一子图像的分辨率和第二子图像的分辨率从上述两种方式中选择合适的目标展示图像生成方法。在一个实施例中,如果对第一子图像和第二子图像进行渲染时使用的两个分辨率的差值大于或等于预设值,则可选用第一种方法生成目标展示图像,比如渲染第二子图像的分辨率为200ppi(Pixels Per Inch,像素每英寸),渲染第二子图像的分辨率为160ppi,预设差值为20,则可选用第一种方法生成目标展示图像;如果对第一子图像和第二子图像进行渲染时使用的两个分辨率的差值小于预设值,则可选用第二种方法生成目标展示图像。
参见图9A,是本申请实施例的对第一区域的图像渲染方法的流程示意图,本申请实施例的所述方法可以对应于上述的S302。同样本申请实施例的所述方法也可以由VR主机、VR眼镜等图像处理设备来执行,如图9A所示,该方法包括以下步骤:
S901,从所述初始图像中获取所述第一区域的彩色图像数据,并获取所述第一区域的深度图像数据。
在一个实施例中,图像处理设备可根据彩色图像数据生成参考图层集合,参考图层集合包括多个参考图层,参考图层具有相同的图像尺寸,参考图层之间的分辨率不相同且小于所述彩色图像数据所对应的图像分辨率。在一个实施例中,参考图层可以是将彩色图像数据按照预设规则进行降分辨率处理后,再按照所述彩色图像数据的图像尺寸进行尺寸处理后得到,例如单纯的尺寸放大处理。
可以理解的,如果对彩色图像数据按照预设规则进行降分辨率处理后,得到的各个参考图层的尺寸等于彩色图像数据的图像尺寸,则可不对参考图层的尺寸进行处理;如果对彩色图像数据按照预设规则进行降分辨率处理后,得到的各个参考图层的尺寸小于彩色图像数据的图像尺寸,则可对各个参考图层进行尺寸放大处理,以使得各个参考图层的尺寸与彩色图像数据的图像尺寸相同。也就是说,各个参考图层可以理解为与第一区域的彩色图像数据具有相同尺寸,但不同分辨率的图像。举例来说,假设第一区域的彩色图像数据的分辨率为600x600,可按照预设规则生成分辨率为300x300、150x150、75x75以及30x30等多个模糊程度不一但图像尺寸相同的参考图层。
S902,基于所述深度图像数据和所述第一渲染规则,对所述彩色图像数据进行渲染,得到具有视觉景深效果的渲染图像。
在一个实施例中,获取到第一区域的彩色图像数据和深度图像数据之后,在S902中基于深度图像数据和第一渲染规则,对彩色图像数据进行渲染,得到具有视觉景深效果的渲染图像,其中,在图10所对应的实施例中描述了视觉景深渲染的处理过程。
S903,将得到的渲染图像作为第一子图像。
在一个实施例中,在S902中可以是基于深度图像数据反映的深度信息和第一渲染规则对彩色图像数据进行渲染。在一个实施例中,基于深度图像数据反映 的深度信息和第一渲染规则对彩色图像数据进行渲染具体可指:根据深度图像数据反映的深度信息,为彩色图像数据中各个像素点从参考图层中选择目标图层,以确定需要生成的第一子图像上的各个像素点的颜色值,完成对各个像素点的渲染,从而也就完成了对彩色图像数据的渲染。其中,参考图层是根据彩色图像数据生成的,各个参考图层具有相同的图像尺寸,参考图层之间的分辨率不相同且小于所述彩色图像数据所对应的图像分辨率。
图9B为本申请实施例的步骤S902的具体流程图。如图9B所示,步骤S902包括以下步骤:
S911,从彩色图像数据中确定出注视点像素;
S912,根据深度图像数据确定注视点像素的深度信息,并依据注视点像素的深度信息确定关于目标用户的目标焦距;
S913,根据目标焦距和深度图像数据确定彩色图像数据中非注视点像素的参考焦距;
S914,根据目标焦距和参考焦距对彩色图像数据进行视觉景深渲染,得到具有视觉景深效果的渲染图像。
在一些实施例中,深度图像数据上的像素值反映了当前场景内图像上各个像素点所对应的场景对象表面点到目标用户人眼的距离,也即深度信息。因此,注视点像素的深度信息即可作为目标用户的目标焦距。同理,彩色图像数据中非注视点像素的参考焦距则可以对应为:各个非注视点像素在深度图像数据中的像素值。
在一个实施例中,参考图11为本申请实施例的步骤S914中图像处理设备根据目标焦距和参考焦距对彩色图像进行视觉渲染的流程示意图,如图11所示,步骤S914包括以下步骤:
S1101,确定非注视点像素中的目标像素的参考焦距与目标焦距之间的差异信息其中,上述非注视区域中的目标像素点为非注视区域中的当前计算的像素点;
S1102,根据差异信息确定目标像素的映射值,并基于目标像素的映射值从参考图层集合中查找目标图层;
S1103,根据目标图层上与目标像素具有相同图像位置的像素的颜色值,确定目标像素的颜色值。
在一个实施例中,映射值可以指CoC值,映射值可以为一个0到1之间的任意数,映射值的大小可反映非注视点像素中的目标像素与注视点像素之间的距离,映射值越大表示目标像素与注视点像素之间的距离越远,目标像素与注视点像素之间的距离越远则说明目标像素的图像越模糊。因此,映射值越大,目标像素的清晰度越低。
在一个实施例中,图像处理设备在执行S1102基于目标像素的映射值从参考图层集合中查找目标图层时可以包括:预先设置至少一组参考图层与映射值的对应关系,然后根据目标像素的映射值从预先设置的对应关系中查找到与目标像素的映射值对应的目标图层。假设第一区域的彩色图像数据的分辨率为600x600,按照预设规则生成分辨率为300x300、150x150以及75x75等多个参考图层,假设分辨率为300x300的参考图层对应的映射值可以为0.2;分辨率150x150的参考图层对应的映射值可以为0.5;分辨率为75x75的参考图层对应的映射值可以为0.8。基于上述假设的参考图层与映射值的对应关系,如果目标像素的映射值为0.5,则选择分辨率为150x150的参考图层作为目标图层。需要说明的是,参 考图层与映射值的对应关系的设置要遵循映射值越大,参考图层的分辨率越低的对应规则。
在一个实施例中,步骤S1102中目标像素可以为非注视点像素中的任一个像素,差异信息中可包括目标像素的参考焦距与目标焦距的焦距差,比如目标焦距为f,目标像素的参考焦距为f 0,该目标像素的参考焦距与目标焦距的焦距差为f 0-f。在一个实施例中,步骤S1102根据差异信息确定目标像素的映射值的方式可以为:根据差异信息中的焦距差确定目标像素的映射值,具体的方式可以为:预先确定至少一组焦距差与映射值的对应关系,在获取到目标像素的参考焦距与目标焦距之间的焦距差之后,在上述的对应的关系中查找与该焦距差对应的映射值作为目标像素的映射值。
在其他实施例中,在所述S1102中差异信息中还可以包括映射值差值,该映射值差值可指目标像素的参考焦距对应的映射值与目标焦距对应的映射值之间的差值。在一些实施例中,预先设置一组焦距(包括目标焦距和参考焦距)与映射值的对应关系;当确定了注视点像素的目标焦距之后,根据上述的预设关系查找到目标焦距对应的映射值;当确定了目标像素的参考焦距之后,根据上述的预设关系查找到该参考焦距对应的映射值;进一步的,根据目标焦距对应的映射值和目标像素的参考焦距对应的映射值确定映射值差值。在一实施例中,步骤S802根据差异信息确定目标像素的映射值的方式可以为根据差异信息中的映射值差值确定目标像素的映射值,具体的方式可以为将差异信息中的映射值差值作为目标像素的映射值。
在一个实施例中,步骤S1102中所提及的参考图层集合的生成过程可以为:对第一区域的彩色图像数据按照预设规则生成多个不同分辨率但相同尺寸的参考图层,然后将多个参考图像组成参考图层集合。其中,参考图层中各个参考图层的分辨率不相同,且均不大于彩色图像数据的分辨率,各个参考图层的尺寸均相同,且均等于彩色图像数据的尺寸。
在一个实施例中,当步骤S1102中所提及的目标图层的数量为一个时,图像处理设备在执行S1103时具体可以包括:将目标图层上与目标像素的具有相同图像位置的像素的颜色值作为目标像素的颜色值。在另一个实施例中,当步骤S1102中所提及的目标图层的数量为至少两个时,图像处理设备在执行S1103时具体可以包括:分别获取至少两个目标图层中与目标像素具有相同图像位置的像素的颜色值,得到至少两个颜色值;按照预设运算规则对至少两个颜色值进行计算,将计算得到的数值作为目标像素的颜色值。在一些实施例中,预设运算规则可以为平均值计算、或者加权平均运算,或者也可以为其他的运算,本申请实施例中不做具体限定。
也就是说,在执行S1103之前,可先确定在S1102中基于目标像素的映射值从参考图层集合中查找到的目标图层的数量,然后基于目标图层的数量并根据目标图层上与目标像素具有相同图像位置的像素的颜色值确定目标像素的颜色值。
在一个实施例中,如果目标图层的数量为一个时,将该目标图层上与目标像素具有相同图像位置的像素的颜色值确定为目标像素的颜色值。参考图12A,为目标图层为一个时确定目标像素的颜色值的方法流程图,在图12A中可假设区域A表示第一区域,F表示第一区域中的注视点像素,B表示第一区域中非注视点像素中的目标像素。假设获取到的第一区域的彩色图像数据为分辨率为600x600的图像,并假设根据目标像素的参考焦距与目标焦距之间的差异信息确定的目标 像素B的映射值表示为CoC B(CoC B为0-1之间的任意的数)。进一步的,假设对600x600的图像按照预设规则生成了4个分辨率不同的参考图层,其分辨率可分别为300x300、150x150、75x75、50x50,并根据预先设置的参考图层与映射值的对应关系查找到CoC B对应的目标图层为参考图层集合中分辨率为75x75的图层。那么基于CoC B可以找到目标图层上与目标像素B具有相同图像位置的像素,即B';将获取到的B'处的颜色值作为目标像素B的颜色值。
作为另一种可行的实施方式,如果目标图层的数量为两个或两个以上时,可分别获取至少两个目标图层中与目标像素具有相同图像位置的像素的颜色值,根据预设的运算规则对两个颜色值进行计算,将计算得到的数值作为目标像素的颜色值。假设基于图12A所做的假设,如果根据预先设置的参考图层与映射值的对应关系查找到CoC B对应的目标图层为参考图层集合中分辨率为75x75的参考图层和分辨率为50x50的参考图层,如图12B所示,则分别找到75x75的图层上与目标像素具有相同图像位置的像素B'和50x50的图层上与目标像素具有相同图像位置的像素B”;获取B'和B”的颜色值,并对两个颜色值进行加权平均运算,将运算得到的数值作为目标像素B的颜色值。
综上所述,在图9A所示的渲染方法中,基于深度图像数据和第一渲染规则对彩色图像数据进行渲染,得到具有视觉景深效果的渲染图像,首先要确定出目标用户的眼睛注视点位置,此注视点位置是眼睛聚焦的部分也是渲染得到的图像中最清晰的部分,透过预先产出的第一区域的深度图像数据可查询注视点的深度信息进而推算出目标焦距。对于第一区域中非注视点像素的颜色值就可以透过各个非注视点像素的深度信息与目标焦距的差异而推算出各个非注视点像素的映射值,紧接着利用各个非注视点像素的映射值作为参考依据,查询各个非注视点像素对应的参考图层,进而确定各个非注视点像素的颜色值。比如针对非注视点像素中的目标像素,如果目标像素的映射值越大,则查询预先产出的分辨率越低的参考图层,并将参考图层上与目标像素的具有相同图像位置的像素的颜色值作为目标像素的颜色值。透过上述过程即可得到具有视觉景深效果的渲染图像。
本申请实施例还提供了一种图像渲染方法,由图像处理设备执行,图13A是本申请实施例提供的另一种图像渲染方法的流程图。如图13A所示,包括以下步骤:
步骤S1301,获取当前场景的初始图像,并确定所述初始图像上的第一区域和第二区域;
步骤S1302,基于第一渲染规则对所述初始图像中所述第一区域的图像数据进行渲染,得到第一子图像;
步骤S1303,基于第二渲染规则对所述初始图像中所述第二区域的图像数据进行渲染,得到第二子图像;
步骤S1304,生成遮罩图层,所述遮罩图层的大小为第二区域中除去第一区域的剩余区域。
在一个实施例中,遮罩图层的尺寸可以依据剩余区域的尺寸确定,为了达到第一子图像和第二子图像自然结合的效果,遮罩图层的尺寸还可以等于或者大于剩余区域的尺寸。
在一个实施例中,遮罩图层的形状可以为第二区域中除去第一区域的剩余区域的形状,比如圆环型,或者“回”字型等。
步骤S1305,根据所述第一子图像、所述第二子图像以及所述遮罩图层生成 目标展示图像;其中,所述第一渲染规则和所述第二渲染规则不相同。
在一些实施例中,根据所述第一子图像在所述初始图像中的位置以及所述遮罩图层的大小,将所述第一子图像、所述第二子图像以及所述遮罩图层进行图层叠加处理,生成目标展示图像,所述目标展示图像包括注视区域和非注视区域;其中,所述注视区域中的像素点的颜色值是基于所述第一子图像的像素点的颜色值计算得到的;所述非注视区域为所述第二区域与所述遮罩图层叠加处理后形成的重叠区域,所述非注视区域的像素点的颜色值是根据所述第二子图像和所述遮罩图层的像素点颜色值确定的。
在一些实施例中,确定所述第一子图像对应的注视区域;计算所述目标展示图像的注视区域上各个像素点的颜色值;将所述第二子图像上叠加所述遮罩图层,将叠加后的第二子图像中除去注视区域的剩余区域确定为目标展示图像的非注视区域;计算所述非注视区域中各个像素点的颜色值;按照所述注视区域中各个像素点的颜色值和所述非注视区域中各个像素点的颜色值对所述注视区域和所述非注视区域进行渲染,得到所述目标展示图像。
在一些实施例中,在确定了遮罩图层的尺寸和形状之后,生成遮罩图层,再计算遮罩图层中各个像素点的遮罩数值。在一个实施例中,计算遮罩图层中目标像素点的遮罩数值的方法可以是:。不失一般性的假设,当根据遮罩图层确定一个圆环时,获取该圆环的外环半径值,记作R,圆环的内环半径值,记作r;确定目标像素点和圆形区域的圆心之间的距离,记作l,R-r<=l<=R;利用公式M=1-(l/R)计算目标像素点的遮罩数值,其中,M表示遮罩图层中目标像素点的遮罩数值。可以理解的,利用与目标像素点的遮罩数值相同的计算方法可以计算得到遮罩图层中非目标像素点的遮罩数值,其中,上述遮罩图层中目标像素点为遮罩图层中当前计算的像素点。
图13B是本申请本申请实施例提供的另一种目标展示图像生成方法示意图。如图13B所示,生成环形遮罩图层1301,其内环区域大小与第一子图像1302大小一致,该圆环的外环半径值为R,圆环的内环半径值为r。第二子图像1303与遮罩图层1301叠加形成叠加区域1303a和非叠加区域1303b。第一子图像1302对应的区域为注视区域1304a,将叠加后的第二子图像1303中除去注视区域1304a的剩余区域确定为目标展示图像的非注视区域1304b。按照所述注视区域中各个像素点的颜色值和所述非注视区域中各个像素点的颜色值对所述注视区域和所述非注视区域进行渲染,得到目标展示图像1304。
在一个实施例中,计算目标展示图像的注视区域上各个像素点的颜色值的方式可为:利用公式B=O*M来计算目标展示图像的非注视区域中的像素点的颜色值。其中,B表示非注视区域中的目标像素点的颜色值,O表示所述第二子图像上与所述目标像素点具有相同图像位置的像素点的颜色值,M表示所述遮罩图层上与所述目标像素点具有相同图像位置的像素点的遮罩数值,其中,上述注视区域中的目标像素点为注视区域中的当前计算的像素点。可以看出,当l=R时,遮罩数值M=0,表明确定该目标像素点最模糊。当l>R时,也即针对非注视区域中遮罩图层之外的目标像素点,他们对应的遮罩数值与l=R处的像素点遮罩数值一致。
通过该技术方案,可以通过将第二区域与遮罩层叠加,实现非注视区域由内至外逐渐模糊,而注视区域的第一子图像保持清晰,进而到具有视觉景深效果的渲染图像。基于上述方法实施例的描述,在一个实施例中,本申请实施例还提供 了一种如图14所示的图像渲染装置的结构示意性框图。如图14所示,本申请实施例中的图像渲染装置,包括获取单元1401、确定单元1402、渲染单元1403以及生成单元1404,在本申请实施例中,所述图像渲染装置还可以设置在需要对图像数据进行渲染的设备中。
在一个实施例中,获取单元1401用于获取当前场景的初始图像;确定单元1402用于确定初始图像上的第一区域和第二区域;渲染单元1403用于基于第一渲染规则对初始图像中第一区域的图像数据进行渲染,得到第一子图像;所述渲染单元1403还用于基于第二渲染规则对初始图像数据中第二区域的图像数据进行渲染,得到第二子图像;生成单元1404用于根据第一子图像和第二子图像生成目标展示图像。
在一个实施例中,所述确定单元1402用于确定初始图像上的第一区域的实施方式可以为:利用人眼追踪策略对目标用户进行人眼跟踪处理,确定初始图像上的注视点;根据注视点和目标视场角FOV确定在初始图像上的第一区域。
在一个实施例中,第一渲染规则和第二渲染规则不相同,渲染单元1303在用于基于第一渲染规则对初始图像中第一区域的图像数据进行渲染,得到第一子图像时,实施方式可以为:从所述初始图像中获取所述第一区域的彩色图像数据,并获取所述第一区域的深度图像数据;基于所述深度图像数据和所述第一渲染规则,对所述彩色图像数据进行渲染,得到具有视觉景深效果的渲染图像;将得到的渲染图像作为第一子图像。
在一个实施例中,基于所述深度图像数据和所述第一渲染规则,对所述彩色图像数据进行渲染,得到具有视觉景深效果的渲染图像的实施方式可以为:从所述彩色图像数据中确定出注视点像素;根据所述深度图像数据确定所述注视点像素的深度信息,并依据所述注视点像素的深度信息确定关于所述目标用户的目标焦距;根据所述目标焦距和所述深度图像数据确定所述彩色图像数据中非注视点像素的参考焦距;根据所述目标焦距和所述参考焦距对所述彩色图像数据进行视觉景深渲染,得到具有视觉景深效果的渲染图像。
在一个实施例中,参考图层集合包括多个参考图层;参考图层具有相同的图像尺寸,参考图层之间的分辨率不相同且小于彩色图像数据所所对应的图像分辨率。在一个实施例中,根据所述目标焦距和所述参考焦距对所述彩色图像数据进行视觉景深渲染,得到具有视觉景深效果的渲染图像的实施方式可以为:确定所述非注视点像素中的目标像素的参考焦距与所述目标焦距之间的差异信息;根据所述差异信息确定所述目标像素的映射值,并基于所述目标像素的映射值从参考图层集合中查找目标图层;根据所述目标图层上与目标像素具有相同图像位置的像素的颜色值,确定所述目标像素的颜色值。
在一个实施例中,所述目标图层的数量为一个,根据所述目标图层上与目标像素具有相同图像位置的像素的颜色值,确定所述目标像素的颜色值的实施方式可以为:将所述目标图层上与目标像素具有相同图像位置的像素的颜色值作为所述目标像素的颜色值。再一个实施例中,所述目标图层的数量为至少两个,根据所述目标图层上与目标像素具有相同图像位置的像素的颜色值,确定所述目标像素的颜色值的实施方式可以为:分别获取至少两个目标图层中与目标像素具有相同图像位置的像素的颜色值,得到至少两个颜色值;按照预设运算规则对所述至少两个颜色值进行计算,将计算得到的数值作为所述目标像素的颜色值。
在一个实施例中,渲染单元1403在用于基于第二渲染规则对所述初始图像 中所述第二区域的图像数据进行渲染,得到第二子图像的实施方式可以为:基于所述第二渲染规则所指示的分辨率和影像质量参数对所述第二区域的图像数据进行渲染,得到第二子图像。
在一个实施例中,所述生成单元1404在用于根据所述第一子图像和所述第二子图像生成目标展示图像的具体方式为:生成遮罩图层;根据第一子图像在初始图像中的位置,将所述第一子图像、所述第二子图像以及所述遮罩图层进行图层叠加处理,生成目标展示图像,所述目标展示图像包括注视区域和非注视区域;其中,所述注视区域为对所述第一子图像、所述第二子图像以及所述遮罩图层进行图层叠加处理而形成的重叠区域,所述注视区域中的像素点的颜色值是基于所述第一子图像、所述第二子图像以及遮罩图层中与重叠区域内对应的部分区域中的像素点的颜色值计算得到的;所述非注视区域中的像素点的颜色值是根据所述第二子图像的像素点颜色值确定的。
在一个实施例中,在图层重叠处理时,是利用公式B=I*M+O*(1-M)计算得到所述注视区域中的像素点的颜色值;其中,B表示所述注视区域中的目标像素点的颜色值,I表示所述第一子图像上与所述目标像素点具有相同图像位置的像素点的颜色值,O表示所述第二子图像上与所述目标像素点具有相同图像位置的像素点的颜色值,M表示所述遮罩图层上与所述目标像素点具有相同图像位置的像素点的遮罩数值。
在本申请实施例中,在获取单元1401获取到当前场景的初始图像之后,确定单元1402确定出所述初始图像上的第一区域和第二区域,进一步的,渲染单元1403分别基于第一渲染规则和第二渲染规则对第一区域的图像数据和第二区域的图像数据进行渲染,得到第一子图像和第二子图像,从而生成单元1404根据第一子图像和第二子图像生成目标展示图像,实现了分区域进行针对性的图像渲染。
基于上述在另一个方法实施例的描述,图14所示的图像渲染装置获取单元1401、确定单元1402、渲染单元1403以及生成单元1404的功能还可以包括如下:
获取单元1401,用于获取当前场景的初始图像;
确定单元1402,用于确定所述初始图像上的第一区域和第二区域;
渲染单元1403,用于基于第一渲染规则对所述初始图像中所述第一区域的图像数据进行渲染,得到第一子图像;
渲染单元1403,用于基于第二渲染规则对所述初始图像中所述第二区域的图像数据进行渲染,得到第二子图像;
生成单元1404,用于生成遮罩图层,所述遮罩图层的大小为第二区域中除去第一区域的剩余区域;
生成单元1404,用于根据所述第一子图像、所述第二子图像以及所述遮罩图层生成目标展示图像;其中,所述第一渲染规则和所述第二渲染规则不相同。
在一些实施例中,生成单元1404根据所述第一子图像在所述初始图像中的位置以及所述遮罩图层的大小,将所述第一子图像、所述第二子图像以及所述遮罩图层进行图层叠加处理,生成目标展示图像,所述目标展示图像包括注视区域和非注视区域;其中,所述注视区域中的像素点的颜色值是基于所述第一子图像的像素点的颜色值计算得到的;所述非注视区域为所述第二区域与所述遮罩图层叠加处理后形成的重叠区域,所述非注视区域的像素点的颜色值是根据所述第二 子图像和所述遮罩图层的像素点颜色值确定的。
在一些实施例中,生成单元1404确定所述第一子图像对应的注视区域;计算所述目标展示图像的注视区域上各个像素点的颜色值;将所述第二子图像上叠加所述遮罩图层,将叠加后的第二子图像中除去注视区域的剩余区域确定为目标展示图像的非注视区域;计算所述非注视区域中各个像素点的颜色值;按照所述注视区域中各个像素点的颜色值和所述非注视区域中各个像素点的颜色值对所述注视区域和所述非注视区域进行渲染,得到所述目标展示图像。
请参见图15,是本申请实施例提供的一种图像处理设备的结构示意性框图,如图15所示的图像处理设备可包括:一个或多个处理器1501和一个或多个存储器1502。上述处理器1501和存储器1502通过总线1503连接。存储器1502用于存储计算机程序,所述计算机程序包括程序指令,处理器1501用于执行所述存储器1502存储的程序指令。
所述存储器1502可以包括易失性存储器(volatile memory),如随机存取存储器(random-access memory,RAM);存储器1502也可以包括非易失性存储器(non-volatile memory),如快闪存储器(flash memory),固态硬盘(solid-state drive,SSD)等;存储器1502还可以包括上述种类的存储器的组合。
所述处理器1501可以是中央处理器CPU。所述处理器1501还可以进一步包括硬件芯片。上述硬件芯片可以是专用集成电路(application-specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)等。该PLD可以是现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)等。所述处理器1501也可以为上述结构的组合。
本申请实施例中,所述存储器1502用于存储计算机程序,所述计算机程序包括程序指令,处理器1501用于执行存储器1502存储的程序指令,用来实现上述实施例中的相应方法的步骤。
在一个实施例中,所述处理器1501被配置调用所述程序指令用于:获取当前场景的初始图像;确定初始图像上的第一区域和第二区域;基于第一渲染规则对初始图像中第一区域的图像数据进行渲染,得到第一子图像;基于第二渲染规则对初始图像数据中第二区域的图像数据进行渲染,得到第二子图像;根据第一子图像和第二子图像生成目标展示图像。
在一个实施例中,所述处理器1501在用于确定初始图像上的第一区域的实施方式可以为:利用人眼追踪策略对目标用户进行人眼跟踪处理,确定初始图像上的注视点;根据注视点和目标视场角FOV确定在初始图像上的第一区域。
在一个实施例中,第一渲染规则和第二渲染规则不相同,所述处理器1501在用于基于第一渲染规则对初始图像中第一区域的图像数据进行渲染,得到第一子图像时,实施方式可以为:从所述初始图像中获取所述第一区域的彩色图像数据,并获取所述第一区域的深度图像数据;基于所述深度图像数据和所述第一渲染规则,对所述彩色图像数据进行渲染,得到具有视觉景深效果的渲染图像;将得到的渲染图像作为第一子图像。
在一个实施例中,处理器1501在用于基于所述深度图像数据和所述第一渲染规则,对所述彩色图像数据进行渲染,得到具有视觉景深效果的渲染图像的实施方式可以为:从所述彩色图像数据中确定出注视点像素;根据所述深度图像数据确定所述注视点像素的深度信息,并依据所述注视点像素的深度信息确定关于 所述目标用户的目标焦距;根据所述目标焦距和所述深度图像数据确定所述彩色图像数据中非注视点像素的参考焦距;根据所述目标焦距和所述参考焦距对所述彩色图像数据进行视觉景深渲染,得到具有视觉景深效果的渲染图像。
在一个实施例中,参考图层集合包括多个参考图层;参考图层具有相同的图像尺寸,参考图层之间的分辨率不相同且小于彩色图像数据所所对应的图像分辨率。在一个实施例中,处理器1501在用于根据所述目标焦距和所述参考焦距对所述彩色图像数据进行视觉景深渲染,得到具有视觉景深效果的渲染图像的实施方式可以为:确定所述非注视点像素中的目标像素的参考焦距与所述目标焦距之间的差异信息;根据所述差异信息确定所述目标像素的映射值,并基于所述目标像素的映射值从参考图层集合中查找目标图层;根据所述目标图层上与目标像素具有相同图像位置的像素的颜色值,确定所述目标像素的颜色值。
在一个实施例中,所述目标图层的数量为一个,所述处理器1501在用于根据所述目标图层上与目标像素具有相同图像位置的像素的颜色值,确定所述目标像素的颜色值的实施方式可以为:将所述目标图层上与目标像素具有相同图像位置的像素的颜色值作为所述目标像素的颜色值。再一个实施例中,所述目标图层的数量为至少两个,所述处理器1501在用于根据所述目标图层上与目标像素具有相同图像位置的像素的颜色值,确定所述目标像素的颜色值的实施方式可以为:分别获取至少两个目标图层中与目标像素具有相同图像位置的像素的颜色值,得到至少两个颜色值;按照预设运算规则对所述至少两个颜色值进行计算,将计算得到的数值作为所述目标像素的颜色值。
在一个实施例中,所述处理器1501在用于基于第二渲染规则对所述初始图像中所述第二区域的图像数据进行渲染,得到第二子图像的实施方式可以为:基于所述第二渲染规则所指示的分辨率和影像质量参数对所述第二区域的图像数据进行渲染,得到第二子图像。
在一个实施例中,所述处理器1501在用于根据所述第一子图像和所述第二子图像生成目标展示图像的具体方式为:生成遮罩图层;根据第一子图像在初始图像中的位置,将所述第一子图像、所述第二子图像以及所述遮罩图层进行图层叠加处理,生成目标展示图像,所述目标展示图像包括注视区域和非注视区域;其中,所述注视区域为对所述第一子图像、所述第二子图像以及所述遮罩图层进行图层叠加处理而形成的重叠区域,所述注视区域中的像素点的颜色值是基于所述第一子图像、所述第二子图像以及遮罩图层中与重叠区域内对应的部分区域中的像素点的颜色值计算得到的;所述非注视区域中的像素点的颜色值是根据所述第二子图像的像素点颜色值确定的。
在一个实施例中,在图层重叠处理时,是利用公式B=I*M+O*(1-M)计算得到所述注视区域中的像素点的颜色值;其中,B表示所述注视区域中的目标像素点的颜色值,I表示所述第一子图像上与所述目标像素点具有相同图像位置的像素点的颜色值,O表示所述第二子图像上与所述目标像素点具有相同图像位置的像素点的颜色值,M表示所述遮罩图层上与所述目标像素点具有相同图像位置的像素点的遮罩数值。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM) 或随机存储记忆体(Random Access Memory,RAM)等。
以上所揭露的仅为本申请部分实施例而已,当然不能以此来限定本申请之权利范围,因此依本申请权利要求所作的等同变化,仍属本申请所涵盖的范围。

Claims (23)

  1. 一种图像渲染方法,由图像处理设备执行,包括:
    获取当前场景的初始图像,并确定所述初始图像上的第一区域和第二区域;
    基于第一渲染规则对所述初始图像中所述第一区域的图像数据进行渲染,得到第一子图像;
    基于第二渲染规则对所述初始图像中所述第二区域的图像数据进行渲染,得到第二子图像;
    根据所述第一子图像和所述第二子图像生成目标展示图像;
    其中,所述第一渲染规则和所述第二渲染规则不相同。
  2. 如权利要求1所述的方法,其中,所述确定所述初始图像上的第一区域和第二区域,包括:
    利用人眼追踪策略对目标用户进行人眼跟踪处理,确定所述初始图像上的注视点;
    根据所述注视点和目标视场角,确定所述初始图像上的第一区域;
    根据所述初始图像和所述第一区域,确定出所述初始图像上的第二区域。
  3. 如权利要求1所述的方法,其中,所述基于第一渲染规则对所述初始图像中所述第一区域的图像数据进行渲染,得到第一子图像,包括:
    从所述初始图像中获取所述第一区域的彩色图像数据,并获取所述第一区域的深度图像数据;所述彩色图像数据为从所述初始图像中截取的所述第一区域的图像数据,所述深度图像数据反映所述初始图像中的场景对象到人眼之间的距离信息;
    基于所述深度图像数据和所述第一渲染规则,对所述彩色图像数据进行渲染,得到具有视觉景深效果的渲染图像;
    将得到的所述渲染图像作为第一子图像。
  4. 如权利要求3所述的方法,其中,所述基于所述深度图像数据和所述第一渲染规则对所述彩色图像数据进行渲染,得到具有视觉景深效果的渲染图像,包括;
    从所述彩色图像数据中确定出注视点像素;
    根据所述深度图像数据确定所述注视点像素对应的深度信息,并依据所述注视点像素对应的深度信息确定关于所述目标用户的目标焦距;
    根据所述目标焦距和所述深度图像数据确定所述彩色图像数据中非注视点像素的参考焦距;
    根据所述目标焦距和所述参考焦距对所述彩色图像数据进行视觉景深渲染,得到具有视觉景深效果的渲染图像。
  5. 如权利要求4所述的方法,其中,所述根据所述目标焦距和所述参考焦距对所述彩色图像数据进行视觉景深渲染,得到具有视觉景深效果的渲染图像,包括:
    确定所述非注视点像素中的每个目标像素的参考焦距与所述目标焦距之间的差异信息;
    根据所述差异信息确定每个所述目标像素的映射值,并基于每个所述目标像素的映射值从参考图层集合中查找目标图层;
    根据所述目标图层上与每个所述目标像素具有相同图像位置的像素的颜色 值,确定每个所述目标像素的颜色值。
  6. 如权利要求5所述的方法,其中,所述参考图层集合包括:多个参考图层;所述多个参考图层具有相同的图像尺寸,所述多个参考图层的分辨率不相同且小于所述彩色图像数据所对应的分辨率。
  7. 如权利要求5所述的方法,其中,所述目标图层的数量为一个,所述根据所述目标图层上与所述每个目标像素具有相同图像位置的像素的颜色值确定每个所述目标像素的颜色值,包括:将所述目标图层上与所述目标像素具有相同图像位置的像素的颜色值作为所述目标像素的颜色值。
  8. 如权利要求5所述的方法,其中,所述目标图层的数量为至少两个,所述根据所述目标图层上与每个所述目标像素具有相同图像位置的像素的颜色值确定每个所述目标像素的颜色值,包括:
    分别在至少两个目标图层中获取与所述目标像素具有相同图像位置的像素的颜色值,得到至少两个颜色值;
    按照预设运算规则对所述至少两个颜色值进行计算,将计算得到的数值作为所述目标像素的颜色值。
  9. 如权利要求1所述的方法,其中,所述基于第二渲染规则对所述初始图像中所述第二区域的图像数据进行渲染,得到第二子图像,包括:
    基于所述第二渲染规则所指示的分辨率和影像质量参数对所述第二区域的图像数据进行渲染,得到第二子图像。
  10. 如权利要求1所述的方法,其中,所述根据所述第一子图像和所述第二子图像生成目标展示图像,包括:
    生成遮罩图层;
    根据所述第一子图像在所述初始图像中的位置,将所述第一子图像、所述第二子图像以及所述遮罩图层进行图层叠加处理,生成目标展示图像,所述目标展示图像包括注视区域和非注视区域;
    其中,所述注视区域为对所述第一子图像、所述第二子图像以及所述遮罩图层进行图层叠加处理而形成的重叠区域,所述注视区域中的像素点的颜色值是基于所述第一子图像、所述第二子图像、以及所述遮罩图层中与所述重叠区域对应的区域中的像素点的颜色值计算得到的;
    所述非注视区域中的像素点的颜色值是根据所述第二子图像的像素点颜色值确定的。
  11. 如权利要求10所述的方法,其中,所述根据所述第一子图像在所述初始图像中的位置,将所述第一子图像、所述第二子图像以及所述遮罩图层进行图层叠加处理,生成目标展示图像,包括:
    确定所述第一子图像在所述第二子图像中的叠加区域;
    在所述第二子图像的叠加区域上叠加所述第一子图像和所述遮罩图层,形成重叠区域,所述重叠区域为所述目标展示图像的注视区域;
    计算所述目标展示图像的注视区域上各个像素点的颜色值;
    将所述第二子图像中除去注视区域的剩余区域确定为目标展示图像的非注视区域;
    计算所述目标展示图像的非注视区域中各个像素点的颜色值;
    按照所述注视区域中各个像素点的颜色值和所述非注视区域中各个像素点的颜色值对所述注视区域和所述非注视区域进行渲染,得到所述目标展示图像。
  12. 如权利要求10所述的方法,其中,所述方法还包括:
    在图层重叠处理时,是利用公式B=I*M+O*(1-M)计算得到所述注视区域中的各个像素点的颜色值;
    其中,B表示所述注视区域中的目标像素点的颜色值,I表示所述第一子图像上与所述目标像素点具有相同图像位置的像素点的颜色值,O表示所述第二子图像上与所述目标像素点具有相同图像位置的像素点的颜色值,M表示所述遮罩图层上与所述目标像素点具有相同图像位置的像素点的遮罩数值。
  13. 如权利要求1所述的方法,其中,所述根据所述第一子图像和所述第二子图像生成目标展示图像,包括:
    根据所述第一子图像和所述第二子图像确定混合区域;
    根据混合区域中各个像素点距图像中心的距离,确定所述混合区域中各个像素点的颜色值;
    基于所述混合区域中各个像素点的颜色值、所述第二区域中各个像素点的颜色值和所述第一区域中各个像素点的颜色值生成目标展示图像。
  14. 如权利要求13所述的方法,其中,所述第一子图像包括边缘区域和核心区域,所述核心区域是根据注视点和目标视场角确定;所述根据第二子图像和第一子图像确定混合区域,包括:
    在所述第二子图像中确定参考区域,所述参考区域覆盖所述第一子图像在所述第二子图像中的对应区域;
    将所述参考区域中除去所述核心区域的部分确定为所述混合区域。
  15. 如权利要求13所述的方法,其中,所述根据混合区域中各个像素点距图像中心的距离,确定所述混合区域中各个像素点的颜色值,包括:
    根据所述参考区域的半径和所述混合区域中每个目标像素点距图像中心的距离,确定每个所述目标像素点的参考颜色值;
    获取第一子图像中与所述混合区域中每个所述目标像素点具有相同图像位置的像素点的颜色值,并获取第二子图像中与混合区域中每个所述目标像素点具有相同图像位置的像素点的颜色值;
    将根据第一子图像中与所述混合区域中每个所述目标像素点具有相同图像位置的像素点的颜色值、第二子图像中与混合区域中每个所述目标像素点具有相同图像位置的像素点的颜色值,计算所述混合区域中每个目标像素点的颜色值,以得到所述混合区域中各个像素点的颜色值。
  16. 如权利要求13所述的方法,其中,所述基于所述混合区域中各个像素点的颜色值、所述第二区域中各个像素点的颜色值和所述第一区域中各个像素点的颜色值生成目标展示图像,包括:
    按照所述混合区域中各个像素点的颜色值对所述混合区域进行渲染;
    按照所述核心区域中各个像素点在所述第一子图像中的颜色值对所述核心区域进行渲染;
    对于所述第二子图像中除去所述核心区域和所述混合区域的剩余部分,按照所述剩余部分中各个像素点在所述第二子图像中的颜色值进行渲染,得到所述目标展示图像。
  17. 一种图像渲染方法,由图像处理设备执行,包括:
    获取当前场景的初始图像,并确定所述初始图像上的第一区域和第二区域;
    基于第一渲染规则对所述初始图像中所述第一区域的图像数据进行渲染,得 到第一子图像;
    基于第二渲染规则对所述初始图像中所述第二区域的图像数据进行渲染,得到第二子图像;
    生成遮罩图层,所述遮罩图层的大小为第二区域中除去第一区域的剩余区域;
    根据所述第一子图像、所述第二子图像以及所述遮罩图层生成目标展示图像;
    其中,所述第一渲染规则和所述第二渲染规则不相同。
  18. 根据权利要求17所述的方法,其中,所述根据所述第一子图像、所述第二子图像以及所述遮罩图层生成目标展示图像,包括:
    根据所述第一子图像在所述初始图像中的位置以及所述遮罩图层的大小,将所述第一子图像、所述第二子图像以及所述遮罩图层进行图层叠加处理,生成目标展示图像,所述目标展示图像包括注视区域和非注视区域;
    其中,所述注视区域中的像素点的颜色值是基于所述第一子图像的像素点的颜色值计算得到的;
    所述非注视区域为所述第二区域与所述遮罩图层叠加处理后形成的重叠区域,所述非注视区域的像素点的颜色值是根据所述第二子图像和所述遮罩图层的像素点颜色值确定的。
  19. 根据权利要求18所述的方法,其中,所述根据所述第一子图像在所述初始图像中的位置以及所述遮罩图层的大小,将所述第一子图像、所述第二子图像以及所述遮罩图层进行图层叠加处理,生成目标展示图像,包括:
    确定所述第一子图像对应的注视区域;
    计算所述目标展示图像的注视区域上各个像素点的颜色值;
    将所述第二子图像上叠加所述遮罩图层,将叠加后的第二子图像中除去注视区域的剩余区域确定为目标展示图像的非注视区域;
    计算所述非注视区域中各个像素点的颜色值;
    按照所述注视区域中各个像素点的颜色值和所述非注视区域中各个像素点的颜色值对所述注视区域和所述非注视区域进行渲染,得到所述目标展示图像。
  20. 一种图像渲染装置,包括:
    获取单元,用于获取当前场景的初始图像;
    确定单元,用于确定所述初始图像上的第一区域和第二区域;
    渲染单元,用于基于第一渲染规则对所述初始图像中所述第一区域的图像数据进行渲染,得到第一子图像;
    所述渲染单元,还用于基于第二渲染规则对所述初始图像中所述第二区域的图像数据进行渲染,得到第二子图像;
    生成单元,用于根据所述第一子图像和所述第二子图像生成目标展示图像;
    其中,所述第一渲染规则和所述第二渲染规则不相同。
  21. 一种图像渲染装置,包括:
    获取单元,用于获取当前场景的初始图像;
    确定单元,用于确定所述初始图像上的第一区域和第二区域;
    渲染单元,用于基于第一渲染规则对所述初始图像中所述第一区域的图像数据进行渲染,得到第一子图像;
    所述渲染单元,还用于基于第二渲染规则对所述初始图像中所述第二区域的图像数据进行渲染,得到第二子图像;
    生成单元,用于生成遮罩图层,所述遮罩图层的大小为第二区域中除去第一 区域的剩余区域;
    所述生成单元,用于根据所述第一子图像、所述第二子图像以及所述遮罩图层生成目标展示图像;其中,所述第一渲染规则和所述第二渲染规则不相同。
  22. 一种图像处理设备,包括处理器和存储器,所述存储器用于存储计算机程序,所述计算机程序包括程序指令,所述处理器被配置用于调用所述程序指令,执行如权利要求1-19任一项所述的方法。
  23. 一种非易失性计算机可读存储介质,该计算机存储介质中存储有计算机程序指令,该计算机程序指令被处理器执行时,用于执行如权利要求1-19任一项所述的方法。
PCT/CN2019/101802 2018-08-21 2019-08-21 一种图像渲染方法、装置及图像处理设备、存储介质 WO2020038407A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19850930.9A EP3757944A4 (en) 2018-08-21 2019-08-21 IMAGE PLAYBACK METHOD AND DEVICE, IMAGE PROCESSING DEVICE AND STORAGE MEDIUM
US17/066,707 US11295528B2 (en) 2018-08-21 2020-10-09 Image rendering method and apparatus, image processing device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810954469.7 2018-08-21
CN201810954469.7A CN109242943B (zh) 2018-08-21 2018-08-21 一种图像渲染方法、装置及图像处理设备、存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/066,707 Continuation US11295528B2 (en) 2018-08-21 2020-10-09 Image rendering method and apparatus, image processing device, and storage medium

Publications (1)

Publication Number Publication Date
WO2020038407A1 true WO2020038407A1 (zh) 2020-02-27

Family

ID=65071016

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/101802 WO2020038407A1 (zh) 2018-08-21 2019-08-21 一种图像渲染方法、装置及图像处理设备、存储介质

Country Status (4)

Country Link
US (1) US11295528B2 (zh)
EP (1) EP3757944A4 (zh)
CN (1) CN109242943B (zh)
WO (1) WO2020038407A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583147A (zh) * 2020-05-06 2020-08-25 北京字节跳动网络技术有限公司 图像处理方法、装置、设备及计算机可读存储介质
CN111598989A (zh) * 2020-05-20 2020-08-28 上海联影医疗科技有限公司 一种图像渲染参数设置方法、装置、电子设备及存储介质

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110324601A (zh) * 2018-03-27 2019-10-11 京东方科技集团股份有限公司 渲染方法、计算机产品及显示装置
CN109242943B (zh) 2018-08-21 2023-03-21 腾讯科技(深圳)有限公司 一种图像渲染方法、装置及图像处理设备、存储介质
CN109741463B (zh) * 2019-01-02 2022-07-19 京东方科技集团股份有限公司 虚拟现实场景的渲染方法、装置及设备
CN109933268A (zh) * 2019-02-25 2019-06-25 昀光微电子(上海)有限公司 一种基于人眼视觉特征的近眼显示装置
WO2020173414A1 (zh) 2019-02-25 2020-09-03 昀光微电子(上海)有限公司 一种基于人眼视觉特征的近眼显示方法和装置
CN109886876A (zh) * 2019-02-25 2019-06-14 昀光微电子(上海)有限公司 一种基于人眼视觉特征的近眼显示方法
CN110321865A (zh) * 2019-07-09 2019-10-11 北京字节跳动网络技术有限公司 头部特效处理方法及装置、存储介质
CN110378914A (zh) * 2019-07-22 2019-10-25 北京七鑫易维信息技术有限公司 基于注视点信息的渲染方法及装置、系统、显示设备
CN112541512B (zh) * 2019-09-20 2023-06-02 杭州海康威视数字技术股份有限公司 一种图像集生成方法及装置
CN110706322B (zh) * 2019-10-17 2023-08-11 网易(杭州)网络有限公司 图像显示的方法、装置、电子设备及可读存储介质
CN110910509A (zh) * 2019-11-21 2020-03-24 Oppo广东移动通信有限公司 图像处理方法以及电子设备和存储介质
CN113129417A (zh) 2019-12-27 2021-07-16 华为技术有限公司 一种全景应用中图像渲染的方法及终端设备
CN111275803B (zh) * 2020-02-25 2023-06-02 北京百度网讯科技有限公司 3d模型渲染方法、装置、设备和存储介质
CN111768352B (zh) * 2020-06-30 2024-05-07 Oppo广东移动通信有限公司 图像处理方法及装置
CN114071150B (zh) * 2020-07-31 2023-06-16 京东方科技集团股份有限公司 图像压缩方法及装置、图像显示方法及装置和介质
CN112308767B (zh) * 2020-10-19 2023-11-24 武汉中科通达高新技术股份有限公司 一种数据展示方法、装置、存储介质以及电子设备
CN112465939B (zh) * 2020-11-25 2023-01-24 上海哔哩哔哩科技有限公司 全景视频渲染方法及系统
CN112634426B (zh) * 2020-12-17 2023-09-29 深圳万兴软件有限公司 多媒体数据显示的方法、电子设备及计算机存储介质
CN114064039A (zh) * 2020-12-22 2022-02-18 完美世界(北京)软件科技发展有限公司 一种渲染管线的创建方法、装置、存储介质及计算设备
CN112767518B (zh) * 2020-12-22 2023-06-06 北京淳中科技股份有限公司 虚拟动画特效制作方法、装置及电子设备
CN112822397B (zh) * 2020-12-31 2022-07-05 上海米哈游天命科技有限公司 游戏画面的拍摄方法、装置、设备及存储介质
CN112686939B (zh) * 2021-01-06 2024-02-02 腾讯科技(深圳)有限公司 景深图像的渲染方法、装置、设备及计算机可读存储介质
WO2023070387A1 (zh) * 2021-10-27 2023-05-04 深圳市大疆创新科技有限公司 一种图像处理方法、装置、拍摄设备及可移动平台
CN114661263B (zh) * 2022-02-25 2023-06-20 荣耀终端有限公司 一种显示方法、电子设备及存储介质
CN114900731B (zh) * 2022-03-31 2024-04-09 咪咕文化科技有限公司 视频清晰度切换方法及装置
CN114782612A (zh) * 2022-04-29 2022-07-22 北京字跳网络技术有限公司 图像渲染方法、装置、电子设备及存储介质

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10025060B2 (en) * 2015-12-08 2018-07-17 Oculus Vr, Llc Focus adjusting virtual reality headset
US10643296B2 (en) * 2016-01-12 2020-05-05 Qualcomm Incorporated Systems and methods for rendering multiple levels of detail
US10373592B2 (en) * 2016-08-01 2019-08-06 Facebook Technologies, Llc Adaptive parameters in image regions based on eye tracking information
US10255714B2 (en) * 2016-08-24 2019-04-09 Disney Enterprises, Inc. System and method of gaze predictive rendering of a focal area of an animation
GB201620351D0 (en) * 2016-11-30 2017-01-11 Jaguar Land Rover Ltd And Cambridge Entpr Ltd Multi-dimensional display
US10410349B2 (en) * 2017-03-27 2019-09-10 Microsoft Technology Licensing, Llc Selective application of reprojection processing on layer sub-regions for optimizing late stage reprojection power
CN107194890B (zh) * 2017-05-18 2020-07-28 上海兆芯集成电路有限公司 Method and apparatus for improving image quality using multiple resolutions
US10621784B2 (en) * 2017-09-29 2020-04-14 Sony Interactive Entertainment America Llc Venue mapping for virtual reality spectating of live events
US10553016B2 (en) * 2017-11-15 2020-02-04 Google Llc Phase aligned foveated rendering

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080158235A1 (en) * 2006-12-31 2008-07-03 Reuven Bakalash Method of rendering pixel-composited images for a graphics-based application running on a host computing system embodying a parallel graphics rendering system
US20170337728A1 (en) * 2016-05-17 2017-11-23 Intel Corporation Triangle Rendering Mechanism
CN106484116A (zh) * 2016-10-19 2017-03-08 腾讯科技(深圳)有限公司 Media file processing method and apparatus
CN106780642A (zh) * 2016-11-15 2017-05-31 网易(杭州)网络有限公司 Method and apparatus for generating fog mask map
CN107203270A (zh) * 2017-06-06 2017-09-26 歌尔科技有限公司 VR image processing method and apparatus
CN107392986A (zh) * 2017-07-31 2017-11-24 杭州电子科技大学 Image depth-of-field rendering method based on Gaussian pyramid and anisotropic filtering
CN107516335A (zh) * 2017-08-14 2017-12-26 歌尔股份有限公司 Graphics rendering method and apparatus for virtual reality
CN107633497A (zh) * 2017-08-31 2018-01-26 成都通甲优博科技有限责任公司 Image depth-of-field rendering method, system, and terminal
CN109242943A (zh) * 2018-08-21 2019-01-18 腾讯科技(深圳)有限公司 Image rendering method and apparatus, image processing device, and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3757944A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583147A (zh) * 2020-05-06 2020-08-25 北京字节跳动网络技术有限公司 Image processing method, apparatus and device, and computer-readable storage medium
CN111583147B (zh) * 2020-05-06 2023-06-06 北京字节跳动网络技术有限公司 Image processing method, apparatus and device, and computer-readable storage medium
CN111598989A (zh) * 2020-05-20 2020-08-28 上海联影医疗科技有限公司 Image rendering parameter setting method and apparatus, electronic device, and storage medium
CN111598989B (zh) * 2020-05-20 2024-04-26 上海联影医疗科技股份有限公司 Image rendering parameter setting method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
US20210027541A1 (en) 2021-01-28
US11295528B2 (en) 2022-04-05
CN109242943A (zh) 2019-01-18
EP3757944A4 (en) 2021-09-01
EP3757944A1 (en) 2020-12-30
CN109242943B (zh) 2023-03-21

Similar Documents

Publication Publication Date Title
WO2020038407A1 Image rendering method and apparatus, image processing device, and storage medium
WO2020192706A1 Object three-dimensional model reconstruction method and apparatus
US11284057B2 (en) Image processing apparatus, image processing method and storage medium
CN111243071A Texture rendering method, system, chip, device, and medium for real-time three-dimensional human body reconstruction
CN108573524B Interactive real-time autostereoscopic display method based on rendering pipeline
Wang et al. Deeplens: Shallow depth of field from a single image
DE112020003794T5 Depth-aware photo editing
KR102096730B1 Image display method, method for manufacturing an irregular screen having a curved surface, and head-mounted display device
JP2019079552A Improvements in and relating to image formation
US20220284679A1 (en) Method and apparatus for constructing three-dimensional face mesh, device, and storage medium
US20230291884A1 (en) Methods for controlling scene, camera and viewing parameters for altering perception of 3d imagery
KR102386642B1 Image processing method and apparatus, electronic device, and storage medium
EP3679513B1 (en) Techniques for providing virtual light adjustments to image data
US10957063B2 (en) Dynamically modifying virtual and augmented reality content to reduce depth conflict between user interface elements and video content
US10885651B2 (en) Information processing method, wearable electronic device, and processing apparatus and system
KR20220083830A Image processing method and image synthesis method, image processing apparatus and image synthesis apparatus, and storage medium
CN111047709A Binocular-vision naked-eye 3D image generation method
JP2016081042A Image anti-aliasing method and apparatus
Liu et al. Stereo-based bokeh effects for photography
JP7387029B2 Single-image 3D photography technique using soft layering and depth-aware inpainting
US10114447B2 (en) Image processing method and apparatus for operating in low-power mode
US11636578B1 (en) Partial image completion
CN114757861A Texture image fusion method and apparatus, computer device, and readable medium
JP2016057691A Program, information processing device, control method, and recording medium
CN114637391A Light-field-based VR content processing method and device

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 19850930

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019850930

Country of ref document: EP

Effective date: 20200921

NENP Non-entry into the national phase

Ref country code: DE