CN112686939A - Depth image rendering method, device and equipment and computer readable storage medium - Google Patents


Publication number
CN112686939A
Authority
CN
China
Prior art keywords
image, rendered, rendering, canvas, depth
Legal status
Granted
Application number
CN202110011428.6A
Other languages
Chinese (zh)
Other versions
CN112686939B (en)
Inventor
张积强
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110011428.6A
Publication of CN112686939A
Application granted
Publication of CN112686939B
Current status: Active

Abstract

The application provides a depth-of-field image rendering method, apparatus, device, and computer-readable storage medium. The method comprises the following steps: acquiring an image to be rendered and a mask image for dividing the image to be rendered into a transparent area and a blurred area; blurring the image to be rendered to obtain a corresponding blurred image; performing image fusion on the image to be rendered and the blurred image through the mask image to obtain a corresponding fused image; and performing image rendering on the image to be rendered and the fused image to obtain a depth-of-field image corresponding to the image to be rendered. With the method and device of this application, power consumption can be greatly reduced while an efficient depth-of-field effect is achieved.

Description

Depth image rendering method, device and equipment and computer readable storage medium
Technical Field
The present disclosure relates to image processing technologies, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for rendering depth images.
Background
The depth-of-field effect is an important optical imaging characteristic of a camera lens and serves as a very important artistic tool in photography to emphasize the photographed subject and give the picture a sense of layering. In the depth-of-field approach of the related art, a sharp image (namely, an image to be rendered) is obtained together with a depth map of the scene; blur parameters are then calculated from the depth map, and part of the sharp image is blurred according to those parameters, so that the image area within a certain distance range stays sharp while the remaining areas become blurred, achieving a depth-of-field effect that focuses the line of sight on the subject. However, this approach relies on the depth map and requires an additional rendering pass to produce it, which greatly increases consumption.
Disclosure of Invention
The embodiments of the application provide a depth image rendering method, device and equipment, and a computer readable storage medium, which can greatly reduce consumption while achieving an efficient depth-of-field effect.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a depth image rendering method, which comprises the following steps:
acquiring an image to be rendered and a mask image for dividing a transparent area and a fuzzy area of the image to be rendered;
blurring the image to be rendered to obtain a corresponding blurred image;
performing image fusion on the image to be rendered and the blurred image through the mask image to obtain a corresponding fusion image;
and performing image rendering on the image to be rendered and the fusion image to obtain a depth-of-field image corresponding to the image to be rendered.
An embodiment of the present application provides a depth image rendering device, including:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring an image to be rendered and a mask image for dividing a transparent area and a fuzzy area of the image to be rendered;
the blurring module is used for blurring the image to be rendered to obtain a corresponding blurred image;
the fusion module is used for carrying out image fusion on the image to be rendered and the blurred image through the mask image to obtain a corresponding fusion image;
and the rendering module is used for performing image rendering on the image to be rendered and the fusion image to obtain a depth-of-field image corresponding to the image to be rendered.
In the foregoing solution, before the obtaining of the image to be rendered, the apparatus further includes:
the device comprises a cache area creating module, a first cache area and a second cache area, wherein the cache area creating module is used for creating a command buffer area containing an instruction set, the command buffer area is provided with the first cache area and the second cache area, and the instruction set comprises an image acquisition instruction and an image blurring instruction;
correspondingly, the obtaining module is further configured to obtain an image to be rendered in response to the image obtaining instruction, and cache the image to be rendered in the first temporary cache region;
the blurring module is further configured to perform blurring processing on the image to be rendered in response to the image blurring instruction to obtain a corresponding blurred image, and cache the blurred image in the second temporary cache region.
In the foregoing solution, before performing image fusion on the image to be rendered and the blurred image through the mask image to obtain a corresponding fused image, the apparatus further includes:
the canvas creating module is used for creating the canvas of the virtual scene when the image to be rendered is the image to be rendered in the virtual scene;
transmitting the image to be rendered cached by the first temporary cache region, the blurred image cached by the second temporary cache region and the mask image into the canvas;
correspondingly, the fusion module is further configured to determine a channel value of a color channel of the mask image;
and in the canvas of the virtual scene, image fusion is carried out on the image to be rendered and the fuzzy image based on the channel value of the color channel of the mask image, so as to obtain a corresponding fusion image.
In the above scheme, the canvas creation module is further configured to obtain an attribute variable of a first map of the image to be rendered through the command buffer area, and transmit the attribute variable of the first map to the canvas material of the canvas;
acquiring an attribute variable of a second map of the blurred image through the command buffer area, and transmitting the attribute variable of the second map to the canvas material of the canvas;
and acquiring an attribute variable of a third map of the mask image, and transmitting the attribute variable of the third map to the canvas material of the canvas.
In the above scheme, the fusion module is further configured to perform image fusion on the image to be rendered and the blurred image through a channel value of a color channel of the mask image in a shader made of a canvas material of the canvas, so as to obtain a fusion image presented on the canvas.
In the above scheme, the fusion module is further configured to multiply the channel value of the color channel of the image to be rendered and the color channel of the mask image to obtain a transparent area image in the virtual scene canvas;
multiplying the blurred image by a reference channel value to obtain a non-transparent area image in a canvas of the virtual scene, wherein the sum of a channel value of the color channel and the reference channel value is equal to 1;
and carrying out image fusion on the transparent area image and the non-transparent area image to obtain a fused image presented on the canvas.
In the above scheme, the rendering module is further configured to render the image to be rendered into the canvas to obtain a first rendered image;
rendering the fused image into the first rendered image to obtain a second rendered image;
and acquiring a target rendering object from the image to be rendered, and rendering the target rendering object into the second rendering image to obtain a depth image corresponding to the image to be rendered.
In the foregoing solution, the rendering module is further configured to, when the image to be rendered includes at least two rendering objects, respectively obtain depth information and a rendering command of each rendering object in the image to be rendered;
sequencing rendering commands of rendering objects in the image to be rendered based on the depth information to obtain a corresponding rendering command sequence, and storing the rendering command sequence to the command buffer area;
and executing each rendering command in the command buffer area according to the rendering command sequence, and rendering each rendering object in the image to be rendered into the canvas to obtain a first rendering canvas image.
In the foregoing solution, the rendering module is further configured to render the target rendering object into the second rendering image to obtain a third rendering image;
rendering the rendering object made of the special effect material into the third rendering image to obtain a depth image corresponding to the image to be rendered.
In the above scheme, the rendering module is further configured to perform transparency identification on the image to be rendered to obtain a transparent object and a non-transparent object of the image to be rendered;
writing the depth information of the non-transparent object into the canvas to obtain a fourth rendering image;
when the number of the transparent objects is at least two, storing each transparent object and the fusion object to a transparent object queue of the command buffer area according to a preset sequence;
and writing the depth information of each transparent object and the fusion object in the transparent object queue into the fourth rendering image according to the preset sequence to obtain a depth image corresponding to the image to be rendered.
In the above scheme, the rendering module is further configured to obtain a rendering priority of the image to be rendered and a rendering priority of the fused image;
comparing the rendering priority of the image to be rendered with the rendering priority of the fused image to determine the rendering order of the image to be rendered and the fused image;
and according to the rendering sequence, performing image rendering on the image to be rendered and the fusion image to obtain a depth-of-field image corresponding to the image to be rendered.
In the above scheme, the blur module is further configured to perform reduction processing on the image to be rendered to obtain a reduced image to be rendered;
performing pixel offset processing on each pixel in the reduced image to be rendered to obtain the image to be rendered after pixel offset;
and carrying out image fusion on the image to be rendered before the reduction processing and the image to be rendered after the pixel offset to obtain a corresponding fuzzy image.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the depth image rendering method provided by the embodiment of the application when the executable instructions stored in the memory are executed.
The embodiment of the present application provides a computer-readable storage medium, which stores executable instructions for causing a processor to execute the method for rendering depth images according to the embodiment of the present application.
The embodiment of the application has the following beneficial effects:
the method comprises the steps of obtaining an image to be rendered and a mask image used for dividing a transparent area and a fuzzy area of the image to be rendered; blurring the image to be rendered to obtain a corresponding blurred image; performing image fusion on an image to be rendered and the blurred image through the mask image to obtain a corresponding fusion image; performing image rendering on the image to be rendered and the fusion image to obtain a depth-of-field image corresponding to the image to be rendered; therefore, the depth map of the image to be rendered does not need to be acquired, the depth map does not need to be rendered additionally, and the power consumption can be greatly reduced while the high-efficiency depth-of-field effect is achieved.
Drawings
Fig. 1 is a schematic diagram of an alternative architecture of a depth image rendering system according to an embodiment of the present disclosure;
fig. 2 is an alternative structural schematic diagram of an electronic device provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of an alternative method for rendering depth-of-field images according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of an alternative method for rendering depth-of-field images according to an embodiment of the present disclosure;
FIG. 5 is a schematic flow chart illustrating an alternative method for image blur processing according to an embodiment of the present disclosure;
fig. 6 is a schematic flowchart of an alternative method for rendering depth-of-field images according to an embodiment of the present disclosure;
FIG. 7 is a schematic flow chart illustrating an alternative method for transferring an image into a canvas according to an embodiment of the present application;
FIG. 8 is a schematic flow chart illustrating an alternative method for image fusion according to an embodiment of the present disclosure;
fig. 9 is an alternative flowchart of a depth image rendering method according to an embodiment of the present disclosure;
fig. 10 is a schematic flowchart of an alternative method for rendering depth-of-field images according to an embodiment of the present disclosure;
fig. 11 is a schematic flowchart of an alternative method for rendering depth-of-field images according to an embodiment of the present disclosure;
fig. 12 is a schematic flowchart of an alternative method for rendering depth-of-field images according to an embodiment of the present disclosure;
fig. 13 is an alternative flowchart of a depth image rendering method according to an embodiment of the present disclosure;
fig. 14 is a schematic flowchart of an alternative method for rendering depth-of-field images according to an embodiment of the present disclosure;
fig. 15 is a schematic diagram of a canvas of a virtual scene provided in an embodiment of the present application;
FIG. 16 is a schematic diagram of an image mask according to an embodiment of the present application;
FIG. 17 is a schematic diagram of a blurred image provided by an embodiment of the present application;
FIG. 18 is a schematic diagram of a shader setting interface according to an embodiment of the present application;
FIGS. 19A-19B are schematic views illustrating depth of field effects provided by embodiments of the present application;
fig. 20 is a schematic structural component diagram of a depth image rendering apparatus according to an embodiment of the present disclosure.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application will be described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first" and "second" are used merely to distinguish between similar objects and do not represent a particular ordering of those objects. It should be understood that "first" and "second" may be interchanged in a particular order or sequence where permitted, so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Referring to fig. 1, fig. 1 is an alternative architecture diagram of a depth image rendering system 100 provided in this embodiment of the present application, in order to support an exemplary application, terminals (illustratively, a terminal 400-1 and a terminal 400-2) are connected to a server 200 through a network 300, where the network may be a wide area network or a local area network, or a combination of the two networks, and data transmission is implemented using a wireless link.
In some embodiments, the server 200 may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, a big data and artificial intelligence platform, and the like. The terminal may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, a camera, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present application is not limited.
In some embodiments, the terminal is used for acquiring an image to be rendered and a mask image used for dividing a transparent area and a fuzzy area of the image to be rendered; blurring the image to be rendered to obtain a corresponding blurred image; performing image fusion on the image to be rendered and the blurred image through the mask image to obtain a corresponding fusion image; performing image rendering on the image to be rendered and the fusion image to obtain a depth-of-field image corresponding to the image to be rendered; therefore, all operations are executed through the terminal, and the real-time acquisition of the depth-of-field image can be ensured.
In some embodiments, the terminal is configured to acquire an image to be rendered and a mask image for dividing the image to be rendered into a transparent region and a fuzzy region, and send the acquired image to be rendered and the mask image to the server 200; the server 200 performs fuzzy processing on the image to be rendered to obtain a corresponding fuzzy image; performing image fusion on the image to be rendered and the blurred image through the mask image to obtain a corresponding fusion image; performing image rendering on the image to be rendered and the fusion image to obtain and return a depth-of-field image corresponding to the image to be rendered to the terminal; therefore, the fuzzy processing, the fusion processing and the rendering processing are all executed at the server side, the power consumption of the terminal can be reduced, and the efficient operation of the terminal is ensured.
Referring to fig. 2, fig. 2 is an optional schematic structural diagram of an electronic device 500 provided in the embodiment of the present application, in practical applications, the electronic device 500 may be the terminal 400-1, the terminal 400-2, or the server 200 in fig. 1, and the electronic device is the terminal 400-1 or 400-2 shown in fig. 1 as an example, so as to describe the electronic device that implements the depth image rendering method in the embodiment of the present application. The electronic device 500 shown in fig. 2 includes: at least one processor 510, memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable communications among the components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 2.
The Processor 510 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating with other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the depth image rendering apparatus provided in the embodiments of the present application may be implemented in software, and fig. 2 illustrates a depth image rendering apparatus 555 stored in a memory 550, which may be software in the form of programs and plug-ins, and includes the following software modules: the acquisition module 5551, the blur module 5552, the fusion module 5553 and the rendering module 5554 are logical and thus may be arbitrarily combined or further split depending on the functions implemented.
The functions of the respective modules will be explained below.
In other embodiments, the depth image rendering Device provided in this embodiment may be implemented in hardware, and for example, the depth image rendering Device provided in this embodiment may be a processor in the form of a hardware decoding processor, which is programmed to execute the depth image rendering method provided in this embodiment, for example, the processor in the form of the hardware decoding processor may be one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
The method for rendering depth images provided by the embodiment of the present application will be described with reference to exemplary applications and implementations of the terminal provided by the embodiment of the present application.
Referring to fig. 3, fig. 3 is an alternative flowchart of a depth image rendering method according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 3.
Step 101, a terminal acquires an image to be rendered and a mask image for dividing the image to be rendered into a transparent area and a fuzzy area.
Here, the image to be rendered may be an original sharp image captured by an image acquisition device (such as a camera) of the terminal, or an original sharp image sent by another device or a server. The image mask is used to cover a layer of the image to be rendered (or any channel of that layer) with an arbitrary shape, thereby dividing the image to be rendered into a transparent area and a blurred area.
For example, a black and white mask image can be created to later control the range of the blur effect in the image to be rendered, and the shape of the mask can be drawn in a customized manner; the white part of the black and white mask indicates the opaque area, i.e. the blurred image area, in the expected final depth-of-field image, and the black part indicates the transparent area, i.e. the sharp image area.
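As an illustration only, the following is a minimal sketch, assuming a NumPy-based pipeline (the function and parameter names are hypothetical, not part of the patent), of how such a black and white mask could be generated procedurally instead of being hand-drawn. White (1.0) values mark the area that will show the blurred image, and black (0.0) values mark the area that stays sharp.

```python
import numpy as np

def make_focus_mask(height, width, center, radius):
    """Build a grayscale mask: 0 (black) where the final image should stay sharp,
    1 (white) where the blurred image should show through."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.sqrt((xs - center[0]) ** 2 + (ys - center[1]) ** 2)
    # Soft falloff: fully sharp inside the focus radius, fully blurred far outside it.
    return np.clip((dist - radius) / radius, 0.0, 1.0).astype(np.float32)

# Example: keep a circular area around the subject sharp and blur everything else.
mask = make_focus_mask(720, 1280, center=(640, 360), radius=200)
```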
And 102, performing fuzzy processing on the image to be rendered to obtain a corresponding fuzzy image.
Before rendering the image to be rendered, the image to be rendered is blurred to obtain a corresponding blurred image.
In some embodiments, referring to fig. 4, fig. 4 is an optional flowchart of the depth image rendering method provided in the embodiment of the present application, and based on fig. 3, before performing step 101, the following steps may also be performed:
105, the terminal creates a command buffer area containing an instruction set, wherein the command buffer area is provided with a first temporary buffer area and a second temporary buffer area, and the instruction set comprises an image acquisition instruction and an image blurring instruction;
correspondingly, steps 101 to 102 can also be realized by the following steps:
step 1011, responding to the image acquisition instruction, acquiring the image to be rendered and the mask image, and caching the image to be rendered and the mask image to a first temporary cache region;
step 1021, responding to the image blurring instruction, performing blurring processing on the image to be rendered to obtain a corresponding blurred image, and caching the blurred image into the second temporary cache region.
Here, a series of instructions is added to a command buffer and executed when needed through a camera event or the Graphics class, which gives control over the rendering process and makes it possible to generate the desired temporary effect in real time.
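To make this structure concrete, the sketch below models, in plain Python with hypothetical names, a command buffer that holds an instruction set (an image acquisition instruction and an image blurring instruction) together with the first and second temporary cache regions; it only illustrates the pattern described above and is not an engine API.

```python
import numpy as np
from dataclasses import dataclass, field
from typing import Callable, Dict, List

def grab_screen_image():
    # Placeholder for the engine call that copies the current screen contents.
    return np.zeros((720, 1280, 3), dtype=np.float32)

def blur(image):
    # Placeholder; the actual blur pass is sketched in a later example.
    return image

@dataclass
class CommandBuffer:
    """Ordered instruction set plus temporary cache regions for intermediate images."""
    instructions: List[Callable[["CommandBuffer"], None]] = field(default_factory=list)
    caches: Dict[str, np.ndarray] = field(default_factory=dict)

    def add(self, instruction):
        self.instructions.append(instruction)

    def execute(self):
        # Invoked later, e.g. when the camera reaches the chosen render event.
        for instruction in self.instructions:
            instruction(self)

def image_acquisition_instruction(cb):
    cb.caches["first"] = grab_screen_image()        # cache the image to be rendered

def image_blurring_instruction(cb):
    cb.caches["second"] = blur(cb.caches["first"])  # cache the blurred image

command_buffer = CommandBuffer()
command_buffer.add(image_acquisition_instruction)
command_buffer.add(image_blurring_instruction)
command_buffer.execute()
```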
In some embodiments, referring to fig. 5, fig. 5 is an optional flowchart of the method for processing image blur provided in the embodiments of the present application, and step 102 shown in fig. 3 can be implemented by steps 201 to 203 shown in fig. 5:
step 201, performing reduction processing on an image to be rendered to obtain a reduced image to be rendered;
step 202, performing pixel offset processing on each pixel in the reduced image to be rendered to obtain the image to be rendered after pixel offset;
and 203, carrying out image fusion on the image to be rendered before the reduction processing and the image to be rendered after the pixel offset to obtain a corresponding fuzzy image.
In practical application, when blurring the image to be rendered, the image may first be downscaled in order to improve image processing efficiency, for example reduced to 1/4 of its size, to obtain a reduced image to be rendered; then each pixel in the reduced image is offset, for example shifted left, right, up and down by a width of two pixels, to obtain the pixel-offset image to be rendered; finally, the image to be rendered before the reduction (namely the initial image to be rendered) and the pixel-offset image are fused at a preset ratio, for example 40% of the initial image is blended with 60% of the pixel-offset image, and a Gaussian blur operation is applied to obtain the corresponding blurred image.
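As a rough illustration of this blur pass, here is a sketch assuming a NumPy/SciPy pipeline; the 1/4 downscale, two-pixel offsets, 40%/60% blend and the blur radius are only the example values quoted above, not fixed parameters of the method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def blur_pass(image, sigma=2.0):
    """Downscale, offset, blend and Gaussian-blur the image to be rendered."""
    # 1. Reduce the image (here to 1/4 of its original area).
    small = image[::2, ::2]

    # 2. Offset every pixel left/right/up/down by two pixels and average the shifts.
    shifted = (np.roll(small, 2, axis=1) + np.roll(small, -2, axis=1)
               + np.roll(small, 2, axis=0) + np.roll(small, -2, axis=0)) / 4.0

    # 3. Scale back up and blend 40% of the original with 60% of the offset image.
    factors = (2, 2) + (1,) * (image.ndim - 2)
    upscaled = zoom(shifted, factors, order=1)[: image.shape[0], : image.shape[1]]
    blended = 0.4 * image + 0.6 * upscaled

    # 4. Finish with a Gaussian blur to smooth the result.
    sigmas = (sigma, sigma) + (0,) * (image.ndim - 2)
    return gaussian_filter(blended, sigma=sigmas)

blurred = blur_pass(np.random.rand(720, 1280, 3).astype(np.float32))
```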
And 103, performing image fusion on the image to be rendered and the blurred image through the mask image to obtain a corresponding fusion image.
In some embodiments, referring to fig. 6, fig. 6 is an optional flowchart of the depth image rendering method provided in the embodiment of the present application, and based on fig. 4, before performing step 103, the following steps may also be performed:
106, when the image to be rendered is the image to be rendered in the virtual scene, creating a canvas of the virtual scene;
here, the canvas creation of the virtual scene may be accomplished by using a model canvas of a preset shape (e.g., square) to be placed behind the target object in the image to be rendered in the virtual scene, and stretching and enlarging the model canvas to cover the entire screen size.
Step 107, transmitting the image to be rendered, the mask image and the blurred image cached in the second temporary cache region into a canvas;
accordingly, step 103 may be performed by steps 1031 to 1032 as follows:
step 1031, determining channel values of color channels of the mask image;
and 1032, performing image fusion on the image to be rendered and the blurred image in the canvas of the virtual scene based on the channel value of the color channel of the mask image to obtain a corresponding fusion image.
Here, the image to be rendered and the blurred image stored into the command buffer are transferred into the canvas of the virtual scene to perform a subsequent operation based on the canvas of the virtual scene.
In some embodiments, referring to fig. 7, fig. 7 is an optional flowchart of a method for transferring an image into a canvas according to an embodiment of the present application, and step 107 shown in fig. 6 may be implemented by steps 301 to 303 shown in fig. 7:
step 301, acquiring an attribute variable of a first map of an image to be rendered through a command buffer area, and transmitting the attribute variable of the first map to a canvas material of a canvas;
step 302, acquiring an attribute variable of a second map of the blurred image through a command buffer area, and transmitting the attribute variable of the second map to a canvas material of a canvas;
step 303, obtaining an attribute variable of a third map of the mask image, and transmitting the attribute variable of the third map to a canvas material of the canvas.
In practical implementation, the attribute variable of the first map of the image to be rendered and the attribute variable of the second map of the blurred image are acquired through the command buffer area and set on the corresponding variables, so that they are passed to the canvas material of the canvas; the drawn mask image can simply be dragged onto the map property of the canvas's UI material.
A material simulates the real physical properties of an object in the virtual scene, such as color, reflection, transparency and maps, and determines how surfaces appear when they are shaded; the image assigned to a material is called a map, and by applying maps in various ways the created canvas of the virtual scene can be rendered into a rich scene picture.
In some embodiments, the terminal may perform image fusion on the image to be rendered and the blurred image in the virtual scene canvas based on the channel value of the color channel of the mask image in the following manner to obtain a corresponding fused image: in a shader made of the canvas material of the canvas, the image to be rendered and the blurred image are subjected to image fusion through the channel value of the color channel of the mask image, and a fusion image presented on the canvas is obtained.
In some embodiments, referring to fig. 8, fig. 8 is an optional flowchart of the method for image fusion provided in the embodiment of the present application, and step 1032 shown in fig. 6 can be implemented by steps 401 to 403 shown in fig. 8:
step 401, multiplying the image to be rendered and the channel value of the color channel of the mask image to obtain a transparent area image in the virtual scene canvas;
step 402, multiplying the blurred image by a reference channel value to obtain a non-transparent area image in a canvas of the virtual scene, wherein the sum of a channel value of a color channel and the reference channel value is equal to 1;
and 403, performing image fusion on the transparent area image and the non-transparent area image to obtain a fused image displayed on the canvas.
In practical applications, each image has one or more color channels, and each color channel stores the information (i.e. channel values) of a color component in the image; the colors of all channels are superimposed and mixed to produce the color of each pixel. The default number of color channels depends on the color mode of the image: for example, bitmap, grayscale, duotone and indexed-color images have only one channel, RGB and Lab images have three channels, CMYK images have four channels, and so on.
Since the mask image is a grayscale image and the channel value of each of its color channels is the same, the channel value of the color channel of the mask image may be any one of its R, G or B channel values. Taking the R channel value as the channel value of the color channel of the mask image as an example, in the actual implementation the acquired initial image to be rendered and the blurred image obtained by the blurring step are fused, in the shader of the canvas material of the canvas, through the R channel value of the mask image to obtain the fused image presented on the canvas. The specific formula may be: fused image = image to be rendered × mask image R channel value + blurred image × (1 - mask image R channel value).
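The per-pixel blend in that formula can be sketched as follows, again assuming NumPy arrays in the 0 to 1 range; the helper name is hypothetical, and since the mask is grayscale any one of its channels can serve as the weight.

```python
import numpy as np

def fuse(to_render, blurred, mask):
    """fused = to_render * mask_R + blurred * (1 - mask_R), per the formula above."""
    mask_r = mask if mask.ndim == 2 else mask[..., 0]   # grayscale mask: any channel works
    weight = mask_r[..., None] if to_render.ndim == 3 else mask_r  # broadcast over color channels
    return to_render * weight + blurred * (1.0 - weight)

# Example with random data standing in for the screen capture and its blurred copy.
sharp = np.random.rand(720, 1280, 3).astype(np.float32)
blurry = np.random.rand(720, 1280, 3).astype(np.float32)
mask = np.random.rand(720, 1280).astype(np.float32)
fused = fuse(sharp, blurry, mask)
```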
Step 104: and performing image rendering on the image to be rendered and the fusion image to obtain a depth-of-field image corresponding to the image to be rendered.
In some embodiments, referring to fig. 9, fig. 9 is an optional flowchart of the method for rendering a depth image according to the embodiment of the present disclosure, and step 104 shown in fig. 6 can be implemented by steps 1041 to 1043 shown in fig. 9:
step 1041, rendering the image to be rendered into a canvas to obtain a first rendered image;
step 1042, rendering the fused image to a first rendered image to obtain a second rendered image;
and 1043, acquiring a target rendering object from the image to be rendered, and rendering the target rendering object into a second rendering image to obtain a depth image corresponding to the image to be rendered.
Before the fused image presented in the canvas is rendered, a camera event is added to the created command buffer so that all of its operations are executed before the canvas fusion is rendered; the objects operated on by the command buffer are rendered after the opaque objects. After the fused image in the canvas has been rendered, the target rendering object in the image to be rendered (such as a character of the image to be rendered) is drawn on top of the canvas, yielding the image with the depth-of-field effect.
In some embodiments, referring to fig. 10, fig. 10 is an optional flowchart of the method for rendering a depth image according to the embodiment of the present application, and step 1041 shown in fig. 9 may be implemented by steps 501 to 503 shown in fig. 10:
step 501, when an image to be rendered comprises at least two rendering objects, respectively obtaining depth information and a rendering command of each rendering object in the image to be rendered;
step 502, sequencing rendering commands of rendering objects in an image to be rendered based on each depth information to obtain a corresponding rendering command sequence, and storing the rendering command sequence to a command buffer area;
step 503, executing each rendering command in the command buffer area according to the rendering command sequence, and rendering each rendering object in the image to be rendered into the canvas to obtain a first rendering canvas image.
The rendering objects are rendered in sequence from the farthest to the nearest relative to the acquisition device, as sketched below.
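A minimal sketch of this ordering step, with hypothetical Python types standing in for the engine's rendering commands:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class RenderCommand:
    depth: float                # distance of the rendering object from the acquisition device
    draw: Callable[[], None]    # the draw call for that rendering object

def build_command_sequence(commands: List[RenderCommand]) -> List[RenderCommand]:
    """Sort draw calls far-to-near so nearer objects are rendered last, on top of farther ones."""
    return sorted(commands, key=lambda c: c.depth, reverse=True)

# Usage: store the sorted sequence in the command buffer, then execute it in order.
sequence = build_command_sequence([
    RenderCommand(depth=5.0, draw=lambda: print("draw character")),
    RenderCommand(depth=30.0, draw=lambda: print("draw background")),
])
for command in sequence:
    command.draw()
```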
In some embodiments, referring to fig. 11, fig. 11 is an optional flowchart of the method for rendering a depth image according to the embodiment of the present application, and when an image to be rendered includes a rendering object of a special effect material, step 1043 shown in fig. 9 may be implemented through steps 601 to 602 shown in fig. 11:
step 601, rendering the target rendering object into a second rendering image to obtain a third rendering image;
step 602, rendering the rendering object of the special effect material to a third rendering image to obtain a depth image corresponding to the image to be rendered.
Here, when the image to be rendered includes a rendering object made of a special effect material, such as a particle special effect, the target rendering object may be rendered into the second rendering image first and the rendering object of the special effect material rendered last; this prevents the special effect object from being blurred over by the depth-of-field effect, i.e. it avoids the problem that the special effect object, having no depth information, would otherwise be covered and blurred by the depth-of-field effect.
In some embodiments, referring to fig. 12, fig. 12 is an optional flowchart of the method for rendering a depth image according to the embodiment of the present application, and step 104 shown in fig. 6 may be implemented by steps 1044 to 1047 shown in fig. 12:
step 1044 of performing transparency recognition on the image to be rendered to obtain a transparent object and a non-transparent object of the image to be rendered;
step 1045, writing the depth information of the non-transparent object into the canvas to obtain a fourth rendering image;
step 1046, when the number of the transparent objects is at least two, storing each transparent object and the fused image to a transparent object queue of the command buffer area according to a preset sequence;
step 1047, writing the depth information of each transparent object and the fusion object in the transparent object queue into the fourth rendering image according to a preset sequence, and obtaining a depth image corresponding to the image to be rendered.
When the number of the transparent objects is at least two, the transparent objects and the fusion objects are stored in a transparent object queue of a command buffer area according to a preset sequence, and finally the depth information of the transparent objects and the fusion objects in the transparent object queue is written in the canvas after the non-transparent objects are rendered according to the preset sequence to obtain a final depth image.
In some embodiments, referring to fig. 13, fig. 13 is an optional flowchart of the method for rendering a depth image according to the embodiment of the present application, and step 104 shown in fig. 3 can be further implemented by steps 1048 to 1050 shown in fig. 13:
step 1048, acquiring rendering priority of the image to be rendered and rendering priority of the fused image;
step 1049, comparing the rendering priority of the image to be rendered with the rendering priority of the fusion image to determine the rendering order of the image to be rendered and the fusion image;
and 1050, performing image rendering on the image to be rendered and the fusion image according to the rendering sequence to obtain a depth-of-field image corresponding to the image to be rendered.
Here, the higher the rendering priority, the earlier the corresponding image is rendered.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
The depth-of-field image rendering method in the related art depends on the depth map, which has to be rendered in an additional pass and therefore greatly increases consumption; meanwhile, in virtual scene (such as game) applications, particle special effects usually carry no depth information, so the depth map cannot be rendered correctly, the special effects are blurred by the depth-of-field effect, and rendering errors occur. In view of this, an embodiment of the present application provides a depth-of-field image rendering method that uses a Command Buffer to modify the rendering queue, blurs the objects rendered before the transparent queue, and uses a mask map instead of a depth map to delimit the blur range, thereby reducing power consumption while achieving an efficient depth-of-field effect and solving the problem of special effect objects being blurred because they are covered by the depth-of-field effect.
Referring to fig. 14, fig. 14 is an alternative flowchart of a depth image rendering method according to an embodiment of the present application, and the steps shown in fig. 14 will be described in detail.
Step 701, a canvas and a mask image of a virtual scene are created.
Referring to fig. 15, fig. 15 is a schematic diagram of a canvas of a virtual scene according to an embodiment of the present application, where a model canvas (also called a patch) with a preset shape (such as a square) is placed behind a character (i.e., a target object in the image to be rendered) in the virtual scene, and the model canvas is stretched and enlarged to cover the entire screen size, so that the canvas creation of the virtual scene can be completed.
Referring to fig. 16, fig. 16 is a schematic diagram of an image mask provided by an embodiment of the present application, and fig. 16 shows a black and white mask image for subsequently controlling a blurring effect range in an image to be rendered, where a white portion in the black and white mask indicates an opaque region or a blurred image region in a desired final depth image, and a black portion indicates a transparent region or a sharp image region in the desired final depth image.
Step 702, obtaining an image to be rendered in a current screen through a command buffer area, and caching the image to be rendered to a first temporary buffer area.
In actual implementation, a command buffer area containing an instruction set can be created, wherein the command buffer area is provided with a first temporary buffer area and a second temporary buffer area, and the instruction set comprises an image acquisition instruction and an image blurring instruction; and the terminal responds to the image acquisition instruction, acquires the image to be rendered, and caches the image to be rendered to the first temporary cache region.
And 703, blurring the image to be rendered through the command buffer area to obtain a corresponding blurred image, and caching the blurred image into a second temporary buffer area.
And the terminal responds to the image blurring instruction, performs blurring processing on the image to be rendered to obtain a corresponding blurred image, and caches the blurred image to the second temporary cache area.
Referring to fig. 17, fig. 17 is a schematic diagram of a blurred image provided in the embodiment of the present application. When performing the blurring, in order to improve image processing efficiency, the image to be rendered may first be downscaled, for example reduced to 1/4 of its size, to obtain a reduced image to be rendered; then each pixel in the reduced image is offset, for example shifted left, right, up and down by a width of two pixels, to obtain the pixel-offset image to be rendered; finally, the image to be rendered before the reduction (namely the initial image to be rendered obtained in step 702) and the pixel-offset image are fused at a preset ratio, for example 40% of the image before reduction is blended with 60% of the pixel-offset image, and a Gaussian blur operation is applied to obtain the corresponding blurred image.
Step 704, transmitting the image to be rendered buffered by the first temporary buffer area, the blurred image buffered by the second temporary buffer area, and the mask image to the canvas of the virtual scene.
Here, the image to be rendered obtained in step 702, the blurred image obtained in step 703 and the mask image created in step 701 are passed into the canvas of the virtual scene. In actual implementation, the attribute variable of the first map of the image to be rendered and the attribute variable of the second map of the blurred image are acquired through the command buffer area and set on the corresponding variables, so that they are passed to the canvas material of the canvas; for the drawn mask image, referring to fig. 18, which is a schematic diagram of a shader setting interface provided in the embodiment of the present application, the mask image is directly dragged onto the map property of the canvas's UI.
Step 705, performing image fusion on the image to be rendered and the blurred image in the canvas of the virtual scene based on the channel value of the color channel of the mask image to obtain a fused image presented on the canvas.
In practical applications, each image has one or more color channels, and each color channel stores the information (i.e. channel values) of a color component in the image; the colors of all channels are superimposed and mixed to produce the color of each pixel. The default number of color channels depends on the color mode of the image: for example, bitmap, grayscale, duotone and indexed-color images have only one channel, RGB and Lab images have three channels, CMYK images have four channels, and so on.
Since the mask image is a grayscale image and the channel value of each of its color channels is the same, the R channel value of the mask image is used as the channel value of the color channel of the mask image. In actual implementation, in the shader of the canvas material of the canvas, the image to be rendered obtained in step 702 and the blurred image obtained in step 703 are fused through the R channel value of the mask image to obtain the fused image presented on the canvas. The specific formula may be: fused image = image to be rendered × mask image R channel value + blurred image × (1 - mask image R channel value).
Step 706, after the opaque objects are rendered, rendering the fused image from the canvas on top of them, and setting the fused image into the depth-writing transparent object queue, to obtain the depth-of-field image corresponding to the image to be rendered.
Image rendering is performed on the image to be rendered and the fused image to obtain the depth-of-field image corresponding to the image to be rendered. In actual implementation, transparency identification may first be performed on the image to be rendered to obtain its transparent objects and non-transparent objects; the non-transparent objects are then rendered, i.e. their depth information is written into the canvas, and the fused image obtained in step 705 is set into the transparent object queue, which contains the transparent objects and the fused image presented on the canvas. When there are at least two transparent objects, the transparent objects and the fused object are stored in the transparent object queue of the command buffer area in a preset order, and finally, after the non-transparent objects are rendered, the depth information of each transparent object and of the fused object in the transparent object queue is written into the canvas in that preset order to obtain the final depth-of-field image.
For example, before the fused image in the canvas obtained in step 705 is rendered, a camera event is added to the created command buffer so that all of its operations are executed before the canvas is rendered, and the objects operated on by the command buffer are rendered after the opaque objects; after rendering of the fused image in the canvas is completed, the target rendering object in the image to be rendered (i.e. the sharp character in fig. 15) is drawn on top of the canvas, so that an image with the depth-of-field effect is obtained.
When the image to be rendered includes a rendering object made of a special effect material, such as a particle special effect, whether that rendering object is affected by the depth-of-field effect can be controlled by setting its rendering queue index. As described above, the fused image presented on the canvas is placed in the transparent object queue; since the transparent object queue index is usually 2450, the rendering queue index of the special effect object can be set to a larger value, for example 2700, so that it is rendered after the fused image presented on the canvas. In this way the rendering object made of the special effect material is not blurred over by the depth-of-field effect, which avoids the problem of the special effect object being covered and blurred by the depth-of-field effect because it has no depth information.
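As an illustration of that ordering, the sketch below (plain Python with hypothetical names; the index values 2450 and 2700 are the ones quoted in the text, not fixed constants) simply draws objects in ascending queue-index order, so the particle effect lands after the fused depth-of-field canvas and stays sharp.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Renderable:
    name: str
    queue_index: int   # larger index -> rendered later, i.e. drawn on top

def render_in_queue_order(objects: List[Renderable]) -> None:
    for obj in sorted(objects, key=lambda o: o.queue_index):
        print(f"render {obj.name} (queue {obj.queue_index})")

render_in_queue_order([
    Renderable("opaque scene objects", 2000),
    Renderable("fused depth-of-field canvas", 2450),   # transparent object queue
    Renderable("particle special effect", 2700),       # drawn after the blur, stays sharp
])
```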
Referring to fig. 19A and 19B, which are schematic views of the depth-of-field effect provided in the embodiment of the present disclosure: fig. 19A shows the effect obtained with the depth-map-based depth-of-field rendering method of the related art, and fig. 19B shows the effect obtained with the depth-of-field image rendering method provided in the embodiment of the present disclosure. Compared with fig. 19A, in fig. 19B the rendering object made of the special effect material is not covered and blurred by the depth-of-field effect.
In this way, compared with a depth-map-based depth-of-field rendering method, the depth-of-field image rendering method provided in the embodiment of the application does not need to render depth maps for all objects in the scene, so the number of draw calls (the operations in which the CPU calls a graphics programming interface such as DirectX or OpenGL to command the GPU to render) is greatly reduced and consumption drops sharply. In terms of effect, it solves the problem of special effect objects being blurred because, having no depth, they are covered by the depth-of-field effect. Moreover, with a customized mask image, the covered range can be adjusted by modifying the map or moving the canvas position, so that the degree of near and far blur within the blur range can be controlled freely, giving high flexibility.
Continuing with the description of the exemplary structure of the depth image rendering apparatus 555 implemented as a software module provided in the embodiment of the present application, in some embodiments, referring to fig. 20, fig. 20 is a schematic structural component diagram of the depth image rendering apparatus provided in the embodiment of the present application, and as shown in fig. 20, the depth image rendering apparatus 555 provided in the embodiment of the present application includes:
an obtaining module 5551, configured to obtain an image to be rendered and a mask image for dividing a transparent region and a blurred region of the image to be rendered;
a blur module 5552, configured to perform blur processing on the image to be rendered to obtain a corresponding blurred image;
the fusion module 5553 is configured to perform image fusion on the image to be rendered and the blurred image through the mask image to obtain a corresponding fusion image;
the rendering module 5554 is configured to perform image rendering on the image to be rendered and the fusion image, so as to obtain a depth-of-field image corresponding to the image to be rendered.
In some embodiments, before the acquiring the image to be rendered, the apparatus further comprises:
the device comprises a cache area creating module, a first cache area and a second cache area, wherein the cache area creating module is used for creating a command buffer area containing an instruction set, the command buffer area is provided with the first cache area and the second cache area, and the instruction set comprises an image acquisition instruction and an image blurring instruction;
correspondingly, the obtaining module is further configured to obtain an image to be rendered in response to the image obtaining instruction, and cache the image to be rendered in the first temporary cache region;
the blurring module is further configured to perform blurring processing on the image to be rendered in response to the image blurring instruction to obtain a corresponding blurred image, and cache the blurred image in the second temporary cache region.
In some embodiments, before the image fusion is performed on the image to be rendered and the blurred image through the mask image to obtain a corresponding fused image, the apparatus further includes:
the canvas creating module is used for creating the canvas of the virtual scene when the image to be rendered is the image to be rendered in the virtual scene;
transmitting the image to be rendered cached by the first temporary cache region, the blurred image cached by the second temporary cache region and the mask image into the canvas;
correspondingly, the fusion module is further configured to determine a channel value of a color channel of the mask image;
and to perform, in the canvas of the virtual scene, image fusion on the image to be rendered and the blurred image based on the channel value of the color channel of the mask image, to obtain a corresponding fusion image.
In some embodiments, the canvas creation module is further configured to obtain, through the command buffer, an attribute variable of a first map of the image to be rendered, and transfer the attribute variable of the first map to a canvas material of the canvas;
acquiring an attribute variable of a second map of the blurred image through the command buffer area, and transmitting the attribute variable of the second map to the canvas material of the canvas;
and acquiring an attribute variable of a third map of the mask image, and transmitting the attribute variable of the third map to the canvas material of the canvas.
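As a rough, hypothetical illustration of what passing the three map attribute variables to the canvas material amounts to, the material can be thought of as a table of named texture properties; the property names below are invented for the example:

```python
import numpy as np

# Placeholder textures standing in for the cached images (shapes are arbitrary).
image_to_render = np.zeros((4, 4, 3))   # first map: the image to be rendered
blurred_image   = np.zeros((4, 4, 3))   # second map: the blurred image
mask_image      = np.zeros((4, 4, 3))   # third map: the mask image

# Hypothetical canvas material: a mapping from property names to bound textures.
canvas_material = {}

def set_texture(material, prop_name, texture):
    """Bind one map's attribute variable to a named property of the material."""
    material[prop_name] = texture

set_texture(canvas_material, "_MainTex", image_to_render)
set_texture(canvas_material, "_BlurTex", blurred_image)
set_texture(canvas_material, "_MaskTex", mask_image)
```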
In some embodiments, the fusion module is further configured to perform image fusion on the image to be rendered and the blurred image in a shader of the canvas material of the canvas, according to the channel value of the color channel of the mask image, so as to obtain a fused image presented on the canvas.
In some embodiments, the fusion module is further configured to multiply the to-be-rendered image by a channel value of a color channel of the mask image, so as to obtain a transparent area image in the virtual scene canvas;
multiplying the blurred image by a reference channel value to obtain a non-transparent area image in a canvas of the virtual scene, wherein the sum of a channel value of the color channel and the reference channel value is equal to 1;
and carrying out image fusion on the transparent area image and the non-transparent area image to obtain a fused image presented on the canvas.
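To make the weighting above concrete, a sketch with made-up array names follows; it assumes the mask's red channel carries the weight, so the sharp and blurred contributions always sum back to a fully weighted pixel:

```python
import numpy as np

def fuse(image, blurred, mask_rgb):
    """Mask-weighted fusion sketch.

    image, blurred: H x W x 3 arrays in [0, 1]
    mask_rgb:       H x W x 3 array; the R channel is assumed to hold the weight
    """
    w = mask_rgb[..., 0:1]                 # channel value of the mask's color channel
    transparent_part = image * w           # sharp contribution (transparent area)
    blurred_part = blurred * (1.0 - w)     # blurred contribution (reference value)
    return transparent_part + blurred_part

# Tiny usage example: with a constant weight of 0.75 every pixel becomes
# 0.8 * 0.75 + 0.2 * 0.25 = 0.65.
img  = np.full((2, 2, 3), 0.8)
blur = np.full((2, 2, 3), 0.2)
mask = np.full((2, 2, 3), 0.75)
print(fuse(img, blur, mask))
```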
In some embodiments, the rendering module is further configured to render the image to be rendered into the canvas to obtain a first rendered image;
rendering the fused image into the first rendered image to obtain a second rendered image;
and acquiring a target rendering object from the image to be rendered, and rendering the target rendering object into the second rendered image to obtain a depth-of-field image corresponding to the image to be rendered.
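The three-pass ordering described above can be pictured with the toy compositor below; the pass names are hypothetical and compositing is reduced to "later passes draw over earlier ones":

```python
def composite_passes(canvas, passes):
    """Apply render passes in order; each pass receives and returns the image."""
    result = canvas
    for draw in passes:
        result = draw(result)
    return result

# Order assumed by this sketch:
#   1) draw the image to be rendered  -> first rendered image
#   2) draw the fused image on top    -> second rendered image
#   3) re-draw the target object      -> depth-of-field image (target stays sharp)
# passes = [draw_image_to_render, draw_fused_image, draw_target_object]
# dof_image = composite_passes(empty_canvas, passes)
```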
In some embodiments, the rendering module is further configured to, when the image to be rendered includes at least two rendering objects, respectively obtain depth information and a rendering command of each rendering object in the image to be rendered;
sequencing rendering commands of rendering objects in the image to be rendered based on the depth information to obtain a corresponding rendering command sequence, and storing the rendering command sequence to the command buffer area;
and executing each rendering command in the command buffer area according to the rendering command sequence, and rendering each rendering object in the image to be rendered into the canvas to obtain the first rendered image.
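A minimal sketch of depth-based command ordering follows, assuming each rendering object carries a depth value and a draw command; far-to-near sorting is chosen here only as an example, since the embodiment merely requires some depth-derived order:

```python
def build_command_sequence(render_objects):
    """Sort draw commands by object depth (far to near in this sketch).

    render_objects: list of (depth, command) pairs, where command is a callable.
    """
    ordered = sorted(render_objects, key=lambda item: item[0], reverse=True)
    return [command for _, command in ordered]

# Usage sketch: store the sequence in the command buffer, then run it.
# for command in build_command_sequence(objects_in_image):
#     command()   # renders one object into the canvas
```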
In some embodiments, the rendering module is further configured to render the target rendering object into the second rendering image, resulting in a third rendering image;
rendering the rendering object made of the special effect material into the third rendering image to obtain a depth-of-field image corresponding to the image to be rendered.
In some embodiments, the rendering module is further configured to perform transparency recognition on the image to be rendered, so as to obtain a transparent object and a non-transparent object of the image to be rendered;
writing the depth information of the non-transparent object into the canvas to obtain a fourth rendering image;
when the number of the transparent objects is at least two, storing each transparent object and the fusion object to a transparent object queue of the command buffer area according to a preset sequence;
and writing the depth information of each transparent object and the fusion object in the transparent object queue into the fourth rendering image according to the preset sequence to obtain a depth-of-field image corresponding to the image to be rendered.
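A sketch of the opaque-first, transparent-later ordering follows; the object fields and the key used for the preset sequence are assumptions made for the example:

```python
def order_for_depth_writes(objects):
    """Non-transparent objects first, then the transparent objects
    (including the fusion object) in a preset sequence."""
    opaque = [o for o in objects if not o["transparent"]]
    transparent_queue = sorted(
        (o for o in objects if o["transparent"]),
        key=lambda o: o["order"],            # the preset sequence
    )
    return opaque + transparent_queue

# Usage sketch:
# scene = [{"name": "rock",   "transparent": False, "order": 0},
#          {"name": "fusion", "transparent": True,  "order": 1},
#          {"name": "glass",  "transparent": True,  "order": 2}]
# for obj in order_for_depth_writes(scene):
#     write_depth(obj)   # hypothetical depth-write call
```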
In some embodiments, the rendering module is further configured to obtain a rendering priority of the image to be rendered and a rendering priority of the fused image;
comparing the rendering priority of the image to be rendered with the rendering priority of the fused image to determine the rendering order of the image to be rendered and the fused image;
and according to the rendering sequence, performing image rendering on the image to be rendered and the fusion image to obtain a depth-of-field image corresponding to the image to be rendered.
In some embodiments, the blur module is further configured to perform reduction processing on the image to be rendered to obtain a reduced image to be rendered;
performing pixel offset processing on each pixel in the reduced image to be rendered to obtain the image to be rendered after pixel offset;
and carrying out image fusion on the image to be rendered before the reduction processing and the image to be rendered after the pixel offset to obtain a corresponding blurred image.
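One way to read this blur scheme (reduce, offset-sample, then fuse with the original) is sketched below in NumPy; the downscale factor, the offset pattern and the equal-weight final blend are assumptions for the example, not values taken from the application:

```python
import numpy as np

def cheap_blur(image, scale=2):
    """Sketch: shrink the image, average shifted copies, blend with the original."""
    h, w, c = image.shape

    # Reduction processing: naive box downscale by an integer factor.
    h2, w2 = h - h % scale, w - w % scale
    small = image[:h2, :w2].reshape(h2 // scale, scale, w2 // scale, scale, c)
    small = small.mean(axis=(1, 3))

    # Pixel offset processing: average each pixel with four shifted copies.
    shifted = (small
               + np.roll(small, 1, axis=0) + np.roll(small, -1, axis=0)
               + np.roll(small, 1, axis=1) + np.roll(small, -1, axis=1)) / 5.0

    # Upscale back and fuse with the pre-reduction image (equal weights assumed).
    up = np.repeat(np.repeat(shifted, scale, axis=0), scale, axis=1)
    return 0.5 * image[:h2, :w2] + 0.5 * up
```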
Embodiments of the present application provide a computer program product or computer program, which comprises computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions, so that the computer device executes the depth-of-field image rendering method described in the embodiments of the present application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to execute the method for rendering a depth-of-field image provided by the embodiments of the present application.
In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example, in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A method for rendering a depth-of-field image, the method comprising:
acquiring an image to be rendered and a mask image for dividing a transparent area and a blurred area of the image to be rendered;
blurring the image to be rendered to obtain a corresponding blurred image;
performing image fusion on the image to be rendered and the blurred image through the mask image to obtain a corresponding fusion image;
and performing image rendering on the image to be rendered and the fusion image to obtain a depth-of-field image corresponding to the image to be rendered.
2. The method of claim 1, wherein prior to obtaining the image to be rendered, the method further comprises:
creating a command buffer area containing an instruction set, wherein the command buffer area is provided with a first temporary buffer area and a second temporary buffer area, and the instruction set comprises an image acquisition instruction and an image blurring instruction;
correspondingly, the acquiring an image to be rendered and a mask image for dividing a transparent area and a blurred area of the image to be rendered comprises:
in response to the image acquisition instruction, acquiring an image to be rendered and a mask image for dividing a transparent area and a blurred area of the image to be rendered, and caching the image to be rendered and the mask image in the first temporary buffer area;
the blurring the image to be rendered to obtain a corresponding blurred image comprises:
in response to the image blurring instruction, performing blurring processing on the image to be rendered to obtain a corresponding blurred image, and caching the blurred image in the second temporary buffer area.
3. The method of claim 2, wherein before the image fusion of the image to be rendered and the blurred image through the mask image to obtain the corresponding fused image, the method further comprises:
when the image to be rendered is an image to be rendered in a virtual scene, creating a canvas of the virtual scene;
passing the image to be rendered cached in the first temporary buffer area, the blurred image cached in the second temporary buffer area, and the mask image into the canvas;
correspondingly, the image fusion is performed on the image to be rendered and the blurred image through the mask image to obtain a corresponding fusion image, and the method includes:
determining channel values for color channels of the mask image;
and in the canvas of the virtual scene, performing image fusion on the image to be rendered and the blurred image based on the channel value of the color channel of the mask image to obtain a corresponding fusion image.
4. The method of claim 3, wherein the passing the image to be rendered cached in the first temporary buffer area, the blurred image cached in the second temporary buffer area, and the mask image into the canvas comprises:
acquiring an attribute variable of a first map of the image to be rendered through the command buffer area, and transmitting the attribute variable of the first map to a canvas material of the canvas;
acquiring an attribute variable of a second map of the blurred image through the command buffer area, and transmitting the attribute variable of the second map to the canvas material of the canvas;
and acquiring an attribute variable of a third map of the mask image, and transmitting the attribute variable of the third map to the canvas material of the canvas.
5. The method of claim 3, wherein the image fusing the image to be rendered and the blurred image based on the channel values of the color channels of the mask image in the virtual scene canvas to obtain a corresponding fused image, comprises:
and in a shader of the canvas material of the canvas, performing image fusion on the image to be rendered and the blurred image through the channel value of the color channel of the mask image to obtain a fused image presented on the canvas.
6. The method of claim 5, wherein the image fusing the image to be rendered and the blurred image through the channel values of the color channels of the mask image to obtain a fused image presented on the virtual scene canvas, comprising:
multiplying the to-be-rendered image by the channel value of the color channel of the mask image to obtain a transparent area image in the virtual scene canvas;
multiplying the blurred image by a reference channel value to obtain a non-transparent area image in a canvas of the virtual scene, wherein the sum of a channel value of the color channel and the reference channel value is equal to 1;
and carrying out image fusion on the transparent area image and the non-transparent area image to obtain a fused image presented on the canvas.
7. The method of claim 3, wherein the performing image rendering on the image to be rendered and the fused image to obtain a depth-of-field image corresponding to the image to be rendered comprises:
rendering the image to be rendered into the canvas to obtain a first rendered image;
rendering the fused image into the first rendered image to obtain a second rendered image;
and acquiring a target rendering object from the image to be rendered, and rendering the target rendering object into the second rendered image to obtain a depth-of-field image corresponding to the image to be rendered.
8. The method of claim 7, wherein rendering the image to be rendered into a canvas resulting in a first rendered image comprises:
when the image to be rendered comprises at least two rendering objects, respectively acquiring depth information and a rendering command of each rendering object in the image to be rendered;
sequencing rendering commands of rendering objects in the image to be rendered based on the depth information to obtain a corresponding rendering command sequence, and storing the rendering command sequence to the command buffer area;
and executing each rendering command in the command buffer area according to the rendering command sequence, and rendering each rendering object in the image to be rendered into the canvas to obtain the first rendered image.
9. The method of claim 7, wherein the image to be rendered comprises a rendering object made of a special effect material, and the rendering the target rendering object into the second rendered image to obtain a depth-of-field image corresponding to the image to be rendered comprises:
rendering the target rendering object into the second rendered image to obtain a third rendered image;
rendering the rendering object made of the special effect material into the third rendered image to obtain a depth-of-field image corresponding to the image to be rendered.
10. The method of claim 3, wherein the performing image rendering on the image to be rendered and the fused image to obtain a depth-of-field image corresponding to the image to be rendered comprises:
performing transparency identification on the image to be rendered to obtain a transparent object and a non-transparent object of the image to be rendered;
writing the depth information of the non-transparent object into the canvas to obtain a fourth rendering image;
when the number of the transparent objects is at least two, storing each transparent object and the fusion object to a transparent object queue of the command buffer area according to a preset sequence;
and writing the depth information of each transparent object and the fusion object in the transparent object queue into the fourth rendering image according to the preset sequence to obtain a depth-of-field image corresponding to the image to be rendered.
11. The method of claim 1, wherein the performing image rendering on the image to be rendered and the fused image to obtain a depth-of-field image corresponding to the image to be rendered comprises:
acquiring the rendering priority of the image to be rendered and the rendering priority of the fused image;
comparing the rendering priority of the image to be rendered with the rendering priority of the fused image to determine the rendering order of the image to be rendered and the fused image;
and according to the rendering sequence, performing image rendering on the image to be rendered and the fusion image to obtain a depth-of-field image corresponding to the image to be rendered.
12. The method of claim 1, wherein the blurring the image to be rendered to obtain a corresponding blurred image comprises:
reducing the image to be rendered to obtain a reduced image to be rendered;
performing pixel offset processing on each pixel in the reduced image to be rendered to obtain the image to be rendered after pixel offset;
and carrying out image fusion on the image to be rendered before the reduction processing and the image to be rendered after the pixel offset to obtain a corresponding blurred image.
13. An apparatus for rendering a depth-of-field image, the apparatus comprising:
an acquisition module, configured to acquire an image to be rendered and a mask image for dividing a transparent area and a blurred area of the image to be rendered;
the blurring module is used for blurring the image to be rendered to obtain a corresponding blurred image;
the fusion module is used for carrying out image fusion on the image to be rendered and the blurred image through the mask image to obtain a corresponding fusion image;
and the rendering module is used for performing image rendering on the image to be rendered and the fusion image to obtain a depth-of-field image corresponding to the image to be rendered.
14. An electronic device, comprising:
a memory for storing executable instructions;
a processor, configured to implement the method for rendering a depth-of-field image according to any one of claims 1 to 12 when executing the executable instructions stored in the memory.
15. A computer-readable storage medium storing executable instructions, wherein the executable instructions, when executed by a processor, implement the method for rendering a depth-of-field image according to any one of claims 1 to 12.
CN202110011428.6A 2021-01-06 2021-01-06 Depth image rendering method, device, equipment and computer readable storage medium Active CN112686939B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110011428.6A CN112686939B (en) 2021-01-06 2021-01-06 Depth image rendering method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110011428.6A CN112686939B (en) 2021-01-06 2021-01-06 Depth image rendering method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112686939A true CN112686939A (en) 2021-04-20
CN112686939B CN112686939B (en) 2024-02-02

Family

ID=75457425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110011428.6A Active CN112686939B (en) 2021-01-06 2021-01-06 Depth image rendering method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112686939B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419233A (en) * 2021-12-31 2022-04-29 网易(杭州)网络有限公司 Model generation method and device, computer equipment and storage medium
CN115546075A (en) * 2022-12-02 2022-12-30 成都智元汇信息技术股份有限公司 Method and device for dynamically enhancing display based on column data labeling area

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120075304A1 (en) * 2010-09-28 2012-03-29 Munkberg Carl J Backface Culling for Motion Blur and Depth of Field
US20150002545A1 (en) * 2013-06-28 2015-01-01 Canon Kabushiki Kaisha Variable blend width compositing
CN107633497A (en) * 2017-08-31 2018-01-26 成都通甲优博科技有限责任公司 A kind of image depth rendering intent, system and terminal
CN109242943A (en) * 2018-08-21 2019-01-18 腾讯科技(深圳)有限公司 A kind of image rendering method, device and image processing equipment, storage medium
CN110570505A (en) * 2019-09-11 2019-12-13 腾讯科技(深圳)有限公司 image rendering method, device and equipment and storage medium
CN110610526A (en) * 2019-08-12 2019-12-24 江苏大学 Method for segmenting monocular portrait and rendering depth of field based on WNET
CN111242838A (en) * 2020-01-09 2020-06-05 腾讯科技(深圳)有限公司 Blurred image rendering method and device, storage medium and electronic device

Also Published As

Publication number Publication date
CN112686939B (en) 2024-02-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40042017

Country of ref document: HK

GR01 Patent grant