CN115311397A - Method, apparatus, device and storage medium for image rendering - Google Patents

Method, apparatus, device and storage medium for image rendering

Info

Publication number
CN115311397A
CN115311397A
Authority
CN
China
Prior art keywords
rendering
augmented reality
region
regions
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210952649.8A
Other languages
Chinese (zh)
Inventor
王奥宇
陈斌斌
覃裕文
张璐薇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202210952649.8A
Publication of CN115311397A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

According to an embodiment of the present disclosure, a method, an apparatus, a device, and a storage medium for image rendering are provided. The method described herein comprises: segmenting the captured image into a plurality of regions, the plurality of regions corresponding to a plurality of augmented reality categories, the plurality of augmented reality categories each having a corresponding rendering priority; and rendering the plurality of regions based on the rendering priorities such that rendering a second region of the plurality of regions is avoided in rendering a first augmented reality effect for a first region of the plurality of regions, wherein a first augmented reality category associated with the first region is different from a second augmented reality category associated with the second region. In this way, multiple augmented reality effects can be sequentially rendered for multiple regions of an image based on rendering priorities, and occlusion or overlap between different augmented reality effects is avoided.

Description

Method, apparatus, device and storage medium for image rendering
Technical Field
Example embodiments of the present disclosure relate generally to the field of computers and, in particular, to methods, apparatuses, devices and computer-readable storage media for image rendering.
Background
Augmented Reality (AR) is a technology that combines the real world with virtual content, and has been widely used in many fields such as artificial intelligence, robotics, education, medical treatment, movie production, and the like. With AR technology, virtual content can be rendered superimposed on real content. For example, an AR scene may be produced with reference to a scene in the real world, thereby giving an immersive sensation to a user viewing the AR scene. In order to provide a rich visual experience to the user, more than one AR special effect may need to be presented for different scenes. In some cases, character objects may also be presented as part of the scene, and may even interact with one or more AR effects in the scene.
Disclosure of Invention
In a first aspect of the disclosure, an image rendering method is provided. The method comprises the following steps: segmenting the captured image into a plurality of regions, the plurality of regions corresponding to a plurality of Augmented Reality (AR) categories, the plurality of AR categories each having a corresponding rendering priority; and rendering the plurality of regions based on the rendering priorities such that rendering a second region of the plurality of regions is avoided in rendering a first AR effect for a first region of the plurality of regions, wherein a first AR category associated with the first region is different from a second AR category associated with the second region.
In a second aspect of the present disclosure, an apparatus for image rendering is provided. The device comprises: an image segmentation module configured to segment a captured image into a plurality of regions, the plurality of regions corresponding to a plurality of Augmented Reality (AR) categories, the plurality of AR categories each having a corresponding rendering priority; and an image rendering module configured to render the plurality of regions based on rendering priorities such that, in rendering a first AR effect for a first region of the plurality of regions, rendering of a second region of the plurality of regions is avoided, wherein a first AR category associated with the first region is different from a second AR category associated with the second region, the rendering priority of the second AR category being different from the rendering priority of the first AR category.
In a third aspect of the disclosure, an electronic device is provided. The electronic device comprises at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit. The instructions, when executed by the at least one processing unit, cause the electronic device to perform the method of the first aspect.
In a fourth aspect of the disclosure, a computer-readable storage medium is provided. The medium has stored thereon a computer program which, when executed by a processor, implements the method of the first aspect.
It should be understood that the statements set forth in this Summary are not intended to identify key or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters denote like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure can be implemented;
FIG. 2 illustrates a flow diagram of an image rendering process according to some embodiments of the present disclosure;
FIG. 3 illustrates a schematic diagram of a user interface for image rendering, according to some embodiments of the present disclosure;
FIG. 4 shows a schematic block diagram of an apparatus for image rendering according to some embodiments of the present disclosure; and
FIG. 5 illustrates a block diagram of a device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are illustrated in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and the embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
In describing embodiments of the present disclosure, the terms "include" and "comprise," and similar language, are to be construed as open-ended, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The term "some embodiments" should be understood as "at least some embodiments". Other explicit and implicit definitions are also possible below.
It will be appreciated that the data involved in the subject technology, including but not limited to the data itself, the acquisition or use of the data, should comply with the requirements of the corresponding laws and regulations and related regulations.
It is understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with the relevant laws and regulations, of the type, scope of use, usage scenarios, etc. of the personal information involved in the present disclosure, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly inform the user that the requested operation will require the acquisition and use of the user's personal information. In this way, the user can autonomously choose, according to the prompt information, whether to provide personal information to software or hardware such as an electronic device, an application program, a server, or a storage medium that performs the operations of the technical solution of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user by way of, for example, a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control for the user to choose "agree" or "disagree" to provide personal information to the electronic device.
It is understood that the above notification and user authorization process is only illustrative and not limiting, and other ways of satisfying relevant laws and regulations may be applied to the implementation of the present disclosure.
AR technology has found widespread use in numerous fields. As an exemplary application, in video capture and production scenarios, a user may superimpose AR special effects on an image to enrich the visual effect. For example, a user may add virtual objects to the presented image via AR technology, may render character objects in a virtual scene, and may even have character objects interact with one or more AR effects in the scene. In some cases, the images taken by the user are dynamic rather than static; for example, the images may change with the orientation and position of the user's camera. In this case, it may be desirable to present an AR effect that varies accordingly. Furthermore, it may be desirable to present more than one AR effect in the same image.
In conventional image rendering techniques, post-processing is applied to the AR effects, which may result in one portion of an AR effect being covered or occluded by another portion. In order to make multiple AR effects coexist in the same image, each AR effect may be rendered separately on the original image, and the resulting intermediate images may then be superimposed to obtain the final image. This approach produces a large amount of redundant data, and multiple AR effects may still overlap in the final image. If the image also contains a character object, further processing is needed according to the occlusion relationship between the character object and the AR special effects, which increases the rendering complexity. Accordingly, improvements in image rendering techniques are desired.
According to embodiments of the present disclosure, a stencil test is employed for the regions of the image when rendering special effects. After the region rendering is completed, foreground special effects such as virtual objects and character objects are rendered in a post-processing manner. In this way, multiple AR effects can be rendered in the same image without generating redundant data, and occlusion and overlap between the multiple AR effects are avoided.
Embodiments of the present disclosure will be described below in connection with exemplary application scenarios. However, it should be understood that the provided image rendering scheme is applicable to a variety of scenarios involving AR technology, including, but not limited to, artificial intelligence, medical applications, mapping, movie animation, and so forth.
FIG. 1 illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure can be implemented. The example environment 100 may include a terminal device 110 and an editing device 120. It should be understood that in some embodiments, terminal device 110 and editing device 120 may be implemented as separate devices as shown in fig. 1. However, depending on the device capabilities, in other embodiments, the terminal device 110 and the editing device 120 may be integrated as a single electronic device. Accordingly, embodiments of the disclosure are not limited in this regard.
Terminal device 110 may capture and present images. In embodiments of the present disclosure, the images may include still images as well as moving images, e.g., video images. The terminal device 110 may have installed thereon an application 112 for image capturing and production. The application 112 may provide virtual special effects for image rendering. A virtual special effect may be, for example, an effect made on the basis of AR technology, which is also referred to as an AR effect in the following. In embodiments of the present disclosure, the AR effect includes, but is not limited to, a background effect, a foreground effect, and the like.
Accordingly, the terminal device 110 may present the AR effect on the captured image through the application 112. As shown in fig. 1, the terminal device 110 captures an original image 102 in which a character object 104 is present. The terminal device 110 may add the background effects 131 to 133 and 141 to 145 and the foreground effect 106 to the image 102 via the application 112, thereby obtaining the image 122 for presentation.
Terminal device 110 may communicate with editing device 120. In some embodiments, editing device 120 may be used to render images. Additionally or alternatively, the editing device 120 may edit a virtual special effect used to render the image. In such embodiments, the terminal device 110 may obtain the virtual special effect from the editing device 120 and add it to the rendered image through the application 112.
Alternatively, in other embodiments, the terminal device 110 may send the original image 102 to the editing device 120 for rendering a virtual special effect. Accordingly, terminal device 110 obtains rendered image 122 from editing device 120 for presentation.
The terminal device 110 may be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile handset, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, media computer, multimedia tablet, Personal Communication System (PCS) device, personal navigation device, Personal Digital Assistant (PDA), audio/video player, digital camera/camcorder, positioning device, television receiver, radio broadcast receiver, electronic book device, gaming device, or any combination of the preceding, including accessories and peripherals for these devices, or any combination thereof. In some embodiments, terminal device 110 can also support any type of interface to the user (such as "wearable" circuitry, etc.). Editing device 120 is a computing system/server of various types capable of providing computing power, including but not limited to mainframes, edge computing nodes, electronic devices in a cloud environment, and so forth.
It should be understood that the description of the structure and function of environment 100 is for exemplary purposes only and does not imply any limitation as to the scope of the disclosure.
Embodiments will be described in the following description with reference to several example states of a User Interface (UI). It should be understood that these UIs and interactions are merely illustrative and that in practice a variety of interface designs and interaction styles may exist. Additionally, the controls included in these UIs may be implemented using any currently known or later developed UI elements and techniques. Further, the type, form, manner of operation, layout, arrangement in the UI, etc. of these controls are illustrative and not intended to limit the scope of the present disclosure in any way.
FIG. 2 illustrates a flow diagram of an image rendering process 200 according to some embodiments of the present disclosure. Process 200 may be implemented at terminal device 110 or editing device 120 or at any suitable device. For ease of discussion, process 200 will be described below with reference to editing device 120 of FIG. 1 in conjunction with FIG. 3. However, it should be understood that in other embodiments, process 200 may be implemented by terminal device 110 or by an electronic device that integrates terminal device 110 or editing device 120. Embodiments of the present disclosure are not limited in this respect.
As previously mentioned, the AR effect may include a background effect, a foreground effect, and the like, depending on the category of the AR effect. Taking the image 122 in fig. 1 as an example, the background effects 131 to 133 and 141 to 145 are presented in background regions of the image, and different background effects may be rendered for different background regions, while a placement-type effect such as the box 106 is presented in the foreground region. In general, a background effect does not obscure or cover a foreground effect. In addition, since the character object 104 is present in the image, the occlusion relationship between the character object 104 and the foreground effect needs to be considered.
In order to render multiple AR effects in the same image while preventing different AR effects from overlapping each other and preventing the character object from being unnecessarily occluded by the AR effects, in process 200, segmentation-based rendering may be performed under a stencil test, and post-processing may then be performed for the foreground effects and the character object.
In embodiments of the present disclosure, the term "stencil buffer" refers to a data buffering technique in three-dimensional rendering, in which a value of one byte in length, hereinafter also referred to as a stencil buffer value, may be allocated for each pixel on a per-pixel basis. The stencil buffer may be used to limit the area of rendering. The terms "stencil test" and "post-processing" refer to different stages in the rendering pipeline. In the stencil test stage, the fragments of the image are assigned an unsigned integer value, also referred to as a reference value. The reference value of a fragment is then compared with the value in the stencil buffer. If the comparison passes, the color value of the corresponding fragment is updated; if the comparison fails, the color value of the fragment is not updated. Post-processing refers to processing such as ambient occlusion, depth of field, and motion blur applied to an image after all rendering is completed.
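Merely as an illustrative, non-limiting sketch (not drawn from the disclosure itself), the comparison performed in the stencil test stage can be summarized as follows:

    #include <cstdint>

    // Illustrative only: one stencil value is stored per pixel; a fragment passes the
    // stencil test only if the comparison between its reference value and the stored
    // stencil value succeeds (here, an "equal" comparison rule).
    bool stencilTestPasses(std::uint8_t referenceValue, std::uint8_t stencilValue) {
        return referenceValue == stencilValue;
    }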
In block 210, the editing device 120 segments the captured image 102 into a plurality of regions corresponding to a plurality of AR categories each having a corresponding rendering priority.
In some embodiments, editing device 120 may segment the plurality of regions based on image semantics. In the example of fig. 1, by identifying the semantics of the image 102, the sky portion may be segmented into a first region 130 and the ground portion may be segmented into a second region 140, where the first region 130 and the second region 140 correspond to different AR categories, i.e., a first AR category and a second AR category, respectively. Different AR categories may have corresponding AR effects. For example, a first AR class has first AR effects 131-133 for rendering the sky, and a second AR class has second AR effects 141-145 for rendering the ground.
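Merely as an illustrative, non-limiting sketch (the type names are assumptions, not part of the disclosure), the result of such semantic segmentation can be thought of as a set of labeled regions, each bound to an AR category and to the rendering priority of that category:

    #include <cstdint>
    #include <vector>

    // Illustrative representation of the segmentation result described above.
    enum class ArCategory { Sky, Ground };

    struct Region {
        ArCategory category;            // e.g., first region 130 -> Sky, second region 140 -> Ground
        int renderingPriority;          // each AR category has a corresponding rendering priority
        std::vector<std::uint8_t> mask; // per-pixel mask delimiting the region within the image
    };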
In block 220, the editing device 120 renders the plurality of regions based on the rendering priorities such that in rendering the first AR effect 131-133 for a first region 130 of the plurality of regions, rendering a second region 140 of the plurality of regions is avoided, wherein a first AR category associated with the first region 130 is different from a second AR category associated with the second region.
In some embodiments, editing device 120 may employ stencil testing to render multiple regions of image 102. In this way, segmentation class rendering of images may be achieved, where the AR effect corresponding to each AR category is rendered only in the respective segmented region, and not in other regions.
In some embodiments, editing device 120 may create a corresponding AR node for each AR category, and each AR node may be associated with at least one model object. The model object may be rendered within the corresponding segmented region and may not be rendered outside the segmented region.
Fig. 3 illustrates a schematic diagram of a user interface 300 for image rendering, according to some embodiments of the present disclosure. The user interface 300 may be an editing interface on the editing device 120 and includes a visualization authoring area 310 for presenting a stencil buffer corresponding to the image 102, an area 312 for presenting a preview of the image, an area 320 for presenting special effect information, an area 330 for presenting parameter information of a special effect, and an area 340 for presenting model rendering information. As shown in fig. 3, for the first AR category and the second AR category, the editing device 120 creates two AR nodes, where the first AR node is "AR segmentation class node: sky" and the second AR node is "AR segmentation class node: ground".
In some embodiments, the editing device 120 may designate a stencil buffer corresponding to the first area 130 as a first reference value, where the first reference value is indicative of a first AR category and is different from reference values designated for other areas in the image 102 other than the first area. Editing device 120 may render first AR effects 131-133 for first region 130 using a first renderer associated with the first AR category, the first renderer rendering first AR effects 131-133 only for regions having the first reference value.
As shown in area 310 in fig. 3, the editing device 120 may designate the stencil buffer corresponding to the first region 130 as a first reference value (e.g., 1), while the other regions are designated with different reference values. The first renderer may render the first AR effects 131 to 133 based on the first reference value, while avoiding rendering in regions having other reference values (e.g., the second reference value).
An exemplary implementation of the first renderer is given below, taking the first region 130 as an example. In this implementation, the mesh component (Mesh) may use a plane, and the depth test and depth writing are turned off, so that the first renderer is only used to fill the stencil without affecting other objects in the image 102. After the stencil test is turned on, the operation performed after the stencil test passes may be set to fill the corresponding region according to the image segmentation, for example, set to "replace". The rule for passing the stencil test may be set to "always pass the stencil test". Further, the stencil test reference value corresponding to the first region 130 is designated as a reference value distinct from those of the other segmented regions. Additionally, since no writes to the color buffer are required, the colorWriteMask of the blend mode may be set to 0. It will be appreciated that the above settings are merely examples, and in practice any other suitable settings may be employed. Accordingly, embodiments of the disclosure are not limited in this respect.
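Merely as an illustrative, non-limiting sketch of such a stencil-fill pass, and assuming an OpenGL-style pipeline (which the present disclosure does not prescribe), the above settings might be expressed as follows; drawSegmentationPlane() is a hypothetical helper that draws the plane covering the segmented region.

    #include <GL/gl.h>

    // Hypothetical helper: draws a plane covering the segmented region (not part of the disclosure).
    void drawSegmentationPlane();

    void fillStencilForFirstRegion(GLint firstReferenceValue /* e.g., 1 */) {
        glDisable(GL_DEPTH_TEST);                            // depth test off
        glDepthMask(GL_FALSE);                               // depth writing off
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // colorWriteMask = 0, no color writes
        glEnable(GL_STENCIL_TEST);                           // turn on the stencil test
        glStencilFunc(GL_ALWAYS, firstReferenceValue, 0xFF); // pass rule: "always pass the stencil test"
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);           // on pass: "replace" the stencil with the reference value
        drawSegmentationPlane();                             // fill the stencil values of the first region
    }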
In some embodiments, the first AR category may be associated with at least one model object. In response to one of the at least one model object being located in the first region 130, the editing device 120 may render the model object with the first AR effect. In the example of fig. 3, the two AR nodes have models 11 to 1N and models 22 to 2N, respectively; accordingly, the models 11 to 1N are rendered only in the first region 130 and are not rendered in the second region 140.
Additionally or alternatively, in some embodiments, different AR nodes may employ respective shaders.
In some embodiments, the editing device 120 may recursively traverse all model objects with the renderer of the first AR node. As an exemplary implementation, after the stencil test is enabled, the stencil test reference value of the corresponding model object is set to the reference value of the current AR node, i.e., the first reference value. Further, the operation performed after the stencil test passes may be set to "keep", and the rule for passing the stencil test is set so that the unsegmented parts of the image are not rendered, for example, set to "equal". Of course, in practice, any other suitable settings may be employed as desired.
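Merely as an illustrative, non-limiting sketch under the same OpenGL-style assumption (the Model type and helper names are assumptions), the stencil settings for the model objects of an AR node might be organized as follows.

    #include <GL/gl.h>
    #include <vector>

    struct Model { void draw() const; }; // hypothetical model object associated with the AR node

    // Render all model objects of an AR node only where the stencil buffer equals the
    // node's reference value ("equal" pass rule, "keep" pass operation).
    void renderNodeModels(const std::vector<Model>& models, GLint nodeReferenceValue) {
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_EQUAL, nodeReferenceValue, 0xFF); // skip the unsegmented parts of the image
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);            // do not modify the stencil buffer
        for (const Model& model : models) {
            model.draw();
        }
    }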
In some embodiments, corresponding AR nodes may be set for all segmented regions of the image 102 in a manner similar to that for the first region 130. As an example, in the example shown in fig. 3, the editing device 120 may designate the stencil buffer corresponding to the second region 140 as a second reference value (e.g., 0). The second reference value indicates the second AR category and is different from the reference values designated for the other regions in the image 102 other than the second region 140; for example, the reference value 0 of the second region 140 is different from the reference value 1 of the first region 130. Since the rendering priority of the first AR category is higher than the rendering priority of the second AR category, according to an embodiment of the present disclosure, the editing device 120 may render the second AR effects for the second region 140 on the image for which the first region has already been rendered, using a second renderer associated with the second AR category, and the second renderer may render the second AR effects only for the region having the second reference value. In this way, the segmented regions of the image are rendered in sequence according to the rendering priority, so that redundant data can be avoided, storage space is saved, and rendering efficiency is improved.
As shown in fig. 3, user interface 300 also includes controls 321 and 322 for selecting or adding AR nodes, respectively. It should be appreciated that the parameters and settings, interface layouts, and controls shown in interface 300 are merely exemplary, and that in actual practice, any suitable interface may be employed.
Therefore, according to embodiments of the present disclosure, a specific AR effect may be added to the corresponding region of an image by creating a new AR node when producing a rendering effect or rendering an image. Moreover, the newly created AR effect does not affect existing AR effects. Compared with the conventional post-processing approach, the image rendering effect is enhanced and the special effect production process is simplified.
In some embodiments, the rendering priorities of the plurality of AR categories may be determined based on a scene tree of the image 102. The scene tree may be determined, for example, based on predetermined rules, or alternatively, may be predefined by a user. For example, the rendering priority may follow the order of sky, building, ceiling, ground, character objects, and virtual objects, as sketched below. By determining the rendering order of the different AR segmentation class nodes according to the rendering priorities, the effect rendered in one region can be prevented from unnecessarily covering or occluding the effect rendered in another region.
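Merely as an illustrative, non-limiting sketch (the ArNode type and helper functions are assumptions, not part of the disclosure), rendering the segmentation class nodes in priority order might be organized as follows, reusing the stencil-fill and model-rendering passes sketched above.

    #include <algorithm>
    #include <vector>

    struct ArNode {
        int priority;       // e.g., sky > building > ceiling > ground
        int referenceValue; // stencil reference value of the node's segmented region
        // ... model objects, shader, etc.
    };

    // Hypothetical helpers corresponding to the stencil-fill and model-rendering passes above.
    void fillStencilForRegion(const ArNode& node);
    void renderModelsWhereStencilEquals(const ArNode& node);

    void renderSegmentationNodes(std::vector<ArNode>& nodes) {
        std::sort(nodes.begin(), nodes.end(),
                  [](const ArNode& a, const ArNode& b) { return a.priority > b.priority; });
        for (const ArNode& node : nodes) {
            fillStencilForRegion(node);           // mark the node's region in the stencil buffer
            renderModelsWhereStencilEquals(node); // render the node's AR effect only inside that region
        }
    }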
After completing the segmentation class rendering, the editing device 120 may render foreground effects and/or character objects for the image 102. In some embodiments, after rendering the plurality of regions 130 and 140, the editing device 120 may render at least one foreground effect in the image 102 based on post-processing. As an example, the foreground effect 106 may correspond to a third AR category, e.g., the "AR placement class node" shown in fig. 3. The AR placement class node may have at least one model 31 to 3N. In other words, in embodiments of the present disclosure, the stencil test may be employed for segmentation class nodes, while conventional rendering processing is employed for other nodes, such as foreground effects.
In some embodiments, after rendering the foreground effects, the editing device 120 may render the character object 104 in the image 102 based on a predetermined presentation relationship. As an exemplary implementation, the editing device 120 may matte out the character object 104 and post-process it. In some embodiments, the character object 104 may correspond to the "AR segmentation class node: character" illustrated in fig. 3, but no stencil test is employed for the AR segmentation class node of the character object.
As an exemplary implementation, the renderer of the AR segmentation class node for the character object 104 may adopt the following settings: the mesh component (Mesh) may use a plane, with its texture set as a post-processing texture for matting out the character object 104. In addition, the currently rendered texture (e.g., the FBO) and the input texture may be blended according to the character segmentation mask in order to preserve the portions belonging to the character object 104. It will be appreciated that the above settings are merely examples, and in practice any other suitable settings may be employed. Accordingly, embodiments of the disclosure are not limited in this regard.
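Merely as an illustrative, non-limiting sketch of the blending described above (the function name is an assumption), each output pixel may be a mask-weighted mix of the input texture and the currently rendered texture, evaluated per channel.

    // Illustrative per-channel blend: out = mask * input + (1 - mask) * rendered,
    // so pixels inside the character segmentation mask keep the character from the input texture.
    float blendChannel(float mask, float input, float rendered) {
        return mask * input + (1.0f - mask) * rendered;
    }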
In some embodiments, the predetermined presentation relationship may include at least one of the following: the character object 104 occludes at least a portion of the at least one foreground effect, the at least one foreground effect occludes at least a portion of the character object 104, the character object 104 is between multiple foreground effects, and so on. For example, the character object 104 may be presented in front of the foreground effect 106, be occluded at least in part by the foreground effect 106, or be located between multiple foreground effects, as desired.
In some embodiments, the editing device 120 may employ different cameras to track close-range objects and distant objects, as sketched below. As an exemplary implementation, for distant objects, e.g., the sky, buildings, etc., the editing device 120 may employ a first camera to track the rotation of the object, and the first camera may use a sensor of, for example, the electronic device. In this case, tracking of distant objects may not require the use of a tracking algorithm. For close-range objects, such as the ground, the ceiling, placement-type virtual objects, etc., the editing device 120 may employ a second camera to track the rotation and position of the object, and the second camera may be controlled using a simultaneous localization and mapping (SLAM) based technique.
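Merely as an illustrative, non-limiting sketch (the types and fields are assumptions), the two-camera arrangement described above might be represented as follows.

    // Illustrative sketch of the two-camera tracking arrangement.
    struct Quaternion { float x, y, z, w; };
    struct Vector3 { float x, y, z; };

    struct FarCamera {      // sky, buildings, ...: rotation only, driven by the device orientation sensor
        Quaternion rotation;
    };

    struct NearCamera {     // ground, ceiling, placement-type virtual objects: rotation and position, driven by SLAM
        Quaternion rotation;
        Vector3 position;
    };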
According to embodiments of the present disclosure, segmentation class rendering is performed under a stencil test for the segmented regions of the image, and a post-processing approach is adopted for the foreground effects and the character object. In this way, multiple AR effects can be rendered in the same image according to different AR categories, while overlap between the AR effects and unnecessary occlusion of the character object by the AR effects are avoided. Therefore, the production process of AR special effects can be simplified, and the image rendering efficiency is improved.
Fig. 4 shows a schematic block diagram of an apparatus 400 for image rendering according to some embodiments of the present disclosure. Apparatus 400 may be implemented as or included in terminal device 110 or editing device 120. The various modules/components in apparatus 400 may be implemented by hardware, software, firmware, or any combination thereof.
As shown, the apparatus 400 includes an image segmentation module 410 configured to segment a captured image into a plurality of regions, the plurality of regions corresponding to a plurality of augmented reality AR categories, the plurality of AR categories each having a corresponding rendering priority. The apparatus 400 further includes an image rendering module 420 configured to render the plurality of regions based on the rendering priority such that a second region of the plurality of regions is avoided from being rendered during rendering of the first AR effect for a first region of the plurality of regions, wherein a first AR category associated with the first region is different from a second AR category associated with the second region.
In some embodiments, the image rendering module 420 includes: a first stencil buffer module configured to designate a stencil buffer corresponding to the first area as a first reference value indicating a first AR category and different from reference values designated by other areas in the image except the first area; and a first region rendering module configured to render a first AR effect for the first region using a first renderer associated with the first AR category, the first renderer rendering the first AR effect only for regions having the first reference value.
In some embodiments, the image rendering module 420 includes: a second stencil buffer module configured to designate a stencil buffer corresponding to the second region as a second reference value indicating the second AR category and different from reference values designated for regions other than the second region in the image; and a second region rendering module configured to render a second AR effect for the second region on the image for which the first region has been rendered, using a second renderer associated with the second AR category, the second renderer rendering the second AR effect only for regions having the second reference value, wherein a rendering priority of the first AR category is higher than a rendering priority of the second AR category.
In some embodiments, multiple regions of an image are rendered in a stencil test of a rendering pipeline.
In some embodiments, the first AR category is associated with at least one model object, and wherein the image rendering module 420 comprises: a model object rendering module configured to render the model object with the first AR effect in response to one of the at least one model object being located in the first region.
In some embodiments, the rendering priority is determined based on a scene tree of the image.
In some embodiments, the apparatus 400 further comprises: a foreground rendering module configured to render at least one foreground effect in the image based on post-processing after rendering the plurality of regions.
In some embodiments, the apparatus 400 further comprises: a character object rendering module configured to render a character object in the image based on a predetermined presentation relationship after rendering the foreground effect, wherein the predetermined presentation relationship includes at least one of: the character object occludes at least a portion of the at least one foreground effect, the at least one foreground effect occludes at least a portion of the character object, the character object being between the plurality of foreground effects.
FIG. 5 illustrates a block diagram of an electronic device 500 in which one or more embodiments of the disclosure may be implemented. It should be understood that the electronic device 500 illustrated in FIG. 5 is merely exemplary and should not be construed as limiting in any way the functionality and scope of the embodiments described herein. The electronic device 500 shown in fig. 5 may be used to implement the terminal device 110 or the editing device 120 of fig. 1.
As shown in fig. 5, the electronic device 500 is in the form of a general-purpose electronic device. The components of the electronic device 500 may include, but are not limited to, one or more processors or processing units 510, memory 520, storage 530, one or more communication units 540, one or more input devices 550, and one or more output devices 560. The processing unit 510 may be a real or virtual processor and may be capable of performing various processes according to programs stored in the memory 520. In a multi-processor system, multiple processing units execute computer-executable instructions in parallel to improve the parallel processing capabilities of the electronic device 500.
Electronic device 500 typically includes a variety of computer storage media. Such media may be any available media that is accessible by electronic device 500 and includes, but is not limited to, volatile and non-volatile media, removable and non-removable media. Memory 520 may be volatile memory (e.g., registers, cache, Random Access Memory (RAM)), non-volatile memory (e.g., Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory), or some combination thereof. Storage 530 may be a removable or non-removable medium and may include a machine-readable medium, such as a flash drive, a magnetic disk, or any other medium, which may be capable of being used to store information and/or data (e.g., training data) and which may be accessed within electronic device 500.
The electronic device 500 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data media interfaces. Memory 520 may include a computer program product 525 having one or more program modules configured to perform the various methods or acts of the various embodiments of the disclosure.
The communication unit 540 enables communication with other electronic devices through a communication medium. Additionally, the functionality of the components of the electronic device 500 may be implemented in a single computing cluster or multiple computing machines, which are capable of communicating over a communications connection. Thus, the electronic device 500 may operate in a networked environment using logical connections to one or more other servers, network Personal Computers (PCs), or another network node.
Input device 550 may be one or more input devices such as a mouse, keyboard, trackball, or the like. Output device 560 may be one or more output devices such as a display, speakers, printer, or the like. The electronic device 500 may also communicate, as desired via the communication unit 540, with one or more external devices (not shown) such as storage devices and display devices, with one or more devices that enable a user to interact with the electronic device 500, or with any device (e.g., a network card, a modem, etc.) that enables the electronic device 500 to communicate with one or more other electronic devices. Such communication may be performed via input/output (I/O) interfaces (not shown).
According to an exemplary implementation of the present disclosure, a computer-readable storage medium having stored thereon computer-executable instructions is provided, wherein the computer-executable instructions are executed by a processor to implement the above-described method. According to an exemplary implementation of the present disclosure, there is also provided a computer program product, tangibly stored on a non-transitory computer-readable medium and comprising computer-executable instructions, which are executed by a processor to implement the method described above.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices and computer program products implemented in accordance with the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing has described implementations of the present disclosure, and the above description is illustrative, not exhaustive, and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen in order to best explain the principles of the implementations, the practical application, or improvements to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the various implementations disclosed herein.

Claims (14)

1. An image rendering method, comprising:
segmenting the captured image into a plurality of regions, the plurality of regions corresponding to a plurality of augmented reality classes, the plurality of augmented reality classes each having a corresponding rendering priority; and
rendering the plurality of regions based on the rendering priorities such that rendering a second region of the plurality of regions is avoided in rendering a first augmented reality effect for a first region of the plurality of regions, wherein a first augmented reality category associated with the first region is different from a second augmented reality category associated with the second region.
2. The method of claim 1, wherein rendering the plurality of regions based on the rendering priority comprises:
designating a stencil buffer corresponding to the first region as a first reference value, the first reference value being indicative of the first augmented reality class and being different from reference values designated by other regions in the image other than the first region; and
rendering the first augmented reality effect for the first region with a first renderer associated with the first augmented reality category, the first renderer rendering the first augmented reality effect only for regions having the first reference value.
3. The method of claim 2, wherein rendering the plurality of regions based on the rendering priority comprises:
designating a stencil buffer corresponding to the second region as a second reference value, the second reference value being indicative of the second augmented reality class and being different from reference values designated for other regions in the image other than the second region; and
rendering the second augmented reality effect for the second region on the image for which the first region has been rendered with a second renderer associated with the second augmented reality class, the second renderer rendering the second augmented reality effect only for regions having the second reference value, wherein the rendering priority of the first augmented reality class is higher than the rendering priority of the second augmented reality class.
4. The method of claim 1, wherein the first augmented reality category is associated with at least one model object, and wherein rendering the plurality of regions comprises:
rendering the model object with the first augmented reality effect in response to one of the at least one model object being located in the first region.
5. The method of claim 1, wherein the rendering priority is determined based on a scene tree of the image.
6. The method of claim 1, wherein the plurality of regions of the image are rendered in a stencil test of a rendering pipeline.
7. The method of claim 1, further comprising:
after rendering the plurality of regions, rendering at least one foreground effect in the image based on post-processing.
8. The method of claim 7, further comprising:
rendering a character object in the image based on a predetermined presentation relationship after rendering the foreground effect,
wherein the predetermined presentation relationship comprises at least one of:
the character object occludes at least a portion of the at least one foreground effect,
the at least one foreground effect occludes at least a portion of the character object,
the character object is between a plurality of foreground effects.
9. An apparatus for image rendering, comprising:
an image segmentation module configured to segment a captured image into a plurality of regions, the plurality of regions corresponding to a plurality of augmented reality categories, the plurality of augmented reality categories each having a corresponding rendering priority; and
an image rendering module configured to render the plurality of regions based on the rendering priorities such that rendering a second region of the plurality of regions is avoided in rendering a first augmented reality effect for a first region of the plurality of regions, wherein a first augmented reality category associated with the first region is different from a second augmented reality category associated with the second region.
10. The apparatus of claim 9, wherein the image rendering module comprises:
a first stencil buffer module configured to designate a stencil buffer corresponding to the first region as a first reference value indicating the first augmented reality class and different from reference values designated by other regions in the image except the first region; and
a first region rendering module configured to render the first augmented reality effect for the first region using a first renderer associated with the first augmented reality category, the first renderer rendering the first augmented reality effect only for regions having the first reference value.
11. The apparatus of claim 10, wherein the image rendering module comprises:
a second stencil buffer module configured to designate a stencil buffer corresponding to the second region as a second reference value indicating the second augmented reality class and different from reference values designated by regions other than the second region in the image; and
a second region rendering module configured to render the second augmented reality effect for the second region on the image for which the first region has been rendered using a second renderer associated with the second augmented reality category, the second renderer rendering the second augmented reality effect only for regions having the second reference value, wherein a rendering priority of the first augmented reality category is higher than a rendering priority of the second augmented reality category.
12. The apparatus of claim 9, wherein the first augmented reality class is associated with at least one model object, and wherein the image rendering module comprises:
a model object rendering module configured to render the model object with the first augmented reality effect in response to one of the at least one model object being located in the first region.
13. An electronic device, comprising:
at least one processing unit; and
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions when executed by the at least one processing unit causing the electronic device to perform the method of any of claims 1-8.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
CN202210952649.8A 2022-08-09 2022-08-09 Method, apparatus, device and storage medium for image rendering Pending CN115311397A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210952649.8A CN115311397A (en) 2022-08-09 2022-08-09 Method, apparatus, device and storage medium for image rendering

Publications (1)

Publication Number Publication Date
CN115311397A 2022-11-08

Family

ID=83860167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210952649.8A Pending CN115311397A (en) 2022-08-09 2022-08-09 Method, apparatus, device and storage medium for image rendering

Country Status (1)

Country Link
CN (1) CN115311397A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109952135A (en) * 2016-09-30 2019-06-28 索尼互动娱乐股份有限公司 Wireless head-band display with difference rendering and sound positioning
CN108076345A (en) * 2016-11-09 2018-05-25 阿里巴巴集团控股有限公司 The coding method of multi-angle video frame, transmission method, device, computer
CN109872379A (en) * 2017-12-05 2019-06-11 富士通株式会社 Data processing equipment and method
US20190088005A1 (en) * 2018-11-15 2019-03-21 Intel Corporation Lightweight View Dependent Rendering System for Mobile Devices
CN110807787A (en) * 2019-11-11 2020-02-18 四川航天神坤科技有限公司 Method and system for extracting skyline
CN113034665A (en) * 2021-01-19 2021-06-25 中电普信(北京)科技发展有限公司 Simulation modeling method and system based on scene tree
CN113362450A (en) * 2021-06-02 2021-09-07 聚好看科技股份有限公司 Three-dimensional reconstruction method, device and system
CN114419241A (en) * 2022-01-18 2022-04-29 北京世纪高通科技有限公司 Three-dimensional model construction method and device and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058292A (en) * 2023-07-28 2023-11-14 北京透彻未来科技有限公司 Tone scale map rendering system based on digital pathological image
CN117058292B (en) * 2023-07-28 2024-04-26 北京透彻未来科技有限公司 Tone scale map rendering system based on digital pathological image

Similar Documents

Publication Publication Date Title
CN109771951B (en) Game map generation method, device, storage medium and electronic equipment
US11270492B2 (en) Graphics processing systems
EP3735677A1 (en) Fusing, texturing, and rendering views of dynamic three-dimensional models
CN108939556B (en) Screenshot method and device based on game platform
US9311756B2 (en) Image group processing and visualization
KR20070086037A (en) Method for inter-scene transitions
US11044398B2 (en) Panoramic light field capture, processing, and display
US11475636B2 (en) Augmented reality and virtual reality engine for virtual desktop infrastucture
WO2014117559A1 (en) 3d-rendering method and device for logical window
CN112652046A (en) Game picture generation method, device, equipment and storage medium
CN115311397A (en) Method, apparatus, device and storage medium for image rendering
US11770551B2 (en) Object pose estimation and tracking using machine learning
WO2024060949A1 (en) Method and apparatus for augmented reality, device, and storage medium
US10212406B2 (en) Image generation of a three-dimensional scene using multiple focal lengths
CN117201883A (en) Method, apparatus, device and storage medium for image editing
CN115617221A (en) Presentation method, apparatus, device and storage medium
CN111460770B (en) Method, device, equipment and storage medium for synchronizing element attributes in document
JP2015200918A (en) Image processor, method, and program
JP2023522370A (en) Image display method, device, equipment and storage medium
CN113947671A (en) Panoramic 360-degree image segmentation and synthesis method, system and medium
CN110766599B (en) Method and system for preventing white screen from appearing when Qt Quick is used for drawing image
CN115174993B (en) Method, apparatus, device and storage medium for video production
CN116302296B (en) Resource preview method, device, equipment and storage medium
US20230148172A1 (en) Augmented reality processing device and method
WO2022135050A1 (en) Rendering method, device, and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination