WO2023142650A1 - Special effect rendering

Special effect rendering

Info

Publication number: WO2023142650A1
Authority: WO - WIPO (PCT)
Application number: PCT/CN2022/134990
Prior art keywords: target, model, rendering, object model, scene
Other languages: French (fr), Chinese (zh)
Inventors: 陶然, 冷晨, 杨瑞健, 赵代平
Original assignee: 上海商汤智能科技有限公司

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a method and apparatus for special effect rendering, a device, and a storage medium. The method comprises: acquiring a target image, the target image comprising a target object; acquiring a 3D object model, the 3D object model comprising a target area; and rendering the 3D object model onto the target image based on depth information of the 3D object model and color information of a non-target area of the 3D object model other than the target area, to obtain a rendering result image, wherein the target object is displayed in the target area.

Description

Special Effect Rendering

Technical Field

The present disclosure relates to the technical field of special effect rendering, and in particular to a method, apparatus, device, and storage medium for special effect rendering.

Background

At present, when special effect rendering is performed on an image, certain special effects or rendering materials are generally rendered onto the image. For example, when taking photos or shooting short videos, sticker effects are often added to faces in the photo or short video, or beautification effects are rendered on the faces. However, this kind of special effect rendering is relatively monotonous and lacks interest.

Summary

To overcome the problems existing in the related art, the present disclosure provides a method, apparatus, device, and storage medium for special effect rendering.
According to a first aspect of the embodiments of the present disclosure, a special effect rendering method is provided, including: acquiring a target image, the target image including a target object; acquiring a 3D object model, the 3D object model including a target area; and rendering the 3D object model onto the target image based on depth information of the 3D object model and color information of a non-target area of the 3D object model other than the target area, to obtain a rendering result image, wherein the target object is displayed in the target area.

In some embodiments, rendering the 3D object model onto the target image based on the depth information of the 3D object model and the color information of the non-target area of the 3D object model includes: assigning the depth information of the target area to pixels of the target image corresponding to the target area, and assigning the depth information and color information of the non-target area of the 3D object model to pixels of the target image corresponding to the non-target area.

On this basis, by assigning the depth information of the target area of the 3D object model, together with the depth information and color information of the non-target area, to the corresponding pixels of the target image, the 3D object model can be rendered onto the target image without occluding the target object. In the rendering result image, part of the target image and the 3D object model are fused into a single image, combining the virtual with the real.

In some embodiments, the 3D object model is located in a pre-acquired 3D scene model, and the method further includes: rendering the 3D scene model onto the target image based on depth information and color information of a non-covered area of the 3D scene model other than a covered area, to obtain the rendering result image, wherein the covered area is the area covered by the target area of the 3D object model from the perspective of the rendering camera. On this basis, the rendering result image can include both the 3D object model and the 3D scene model, making the rendering result more varied and more interesting.

In some embodiments, the 3D object model includes a plurality of sub-object models, at least one of which moves along a corresponding motion path; and/or the 3D scene model includes a plurality of sub-scene models, at least one of which moves along a corresponding motion path.

On this basis, in the rendered image, at least one sub-scene model of the 3D scene model and/or at least one sub-object model of the 3D object model can move along its corresponding path, so that the presented rendering result includes dynamically moving sub-object models and/or sub-scene models.

In some embodiments, the method further includes: acquiring a display range of the rendering result image, and rendering a sub-object model that has moved beyond the display range at the starting position of the motion path corresponding to that sub-object model.

Re-rendering a sub-object model that has moved out of the display range at the starting position of its corresponding motion path while maintaining its motion makes the motion of the sub-object model appear as a cyclic motion.

In some embodiments, the method further includes: acquiring the display range of the rendering result image, and rendering a sub-scene model that has moved beyond the display range at the starting position of the motion path corresponding to that sub-scene model.

Re-rendering a sub-scene model that has moved out of the display range at the starting position of its corresponding motion path while maintaining its motion makes the motion of the sub-scene model appear as a cyclic motion.
In some embodiments, before rendering the 3D object model and/or the 3D scene model, the method further includes: acquiring an orientation of the target object; and adjusting, based on the orientation of the target object, the orientation of the 3D object model and/or the 3D scene model in the rendering result image.

The orientation of the 3D object model and/or the 3D scene model can be adjusted according to the orientation of the target object, so that their orientation in the rendering result differs depending on the orientation of the target object.

In some embodiments, before rendering the 3D object model and/or the 3D scene model, the method further includes: acquiring the orientation of the target object; and adjusting an angle parameter of the rendering camera based on the orientation of the target object, wherein the 3D object model and/or the 3D scene model are displayed with different orientations in the rendering result image under different angle parameters.

The orientation of the 3D object model and/or the 3D scene model in the rendering result can be changed by changing the angle parameter of the rendering camera, and the angle parameter of the rendering camera can be changed according to the orientation of the target object, so that the orientation of the 3D object model and/or the 3D scene model in the rendering result changes as the angle parameter of the rendering camera changes.
In some embodiments, when the target image includes a plurality of target objects, acquiring the 3D object model includes: acquiring a plurality of 3D object models corresponding to the target objects respectively, each 3D object model including a target area.

On this basis, when multiple target objects exist in the target image, multiple 3D object models can be acquired, so that the rendering result contains the multiple target objects and the multiple 3D object models, with the 3D object models displayed in correspondence with the target objects in the rendering result image.

Further, acquiring the 3D object model includes: recognizing the target image to obtain a recognition result; and determining, based on the recognition result, the 3D object model corresponding to each target object from the plurality of 3D object models.

The target image can be recognized, and the 3D object model corresponding to each target object can be determined according to the recognition result, so that in the rendering result each target object is displayed in combination with its corresponding 3D object model.

Further, in some embodiments, the recognition result includes attribute information of the target object, and the 3D object model matches the attribute information of the corresponding target object.

On this basis, the 3D object model matching the attribute information can be determined according to the attribute information of the target object, so that in the rendering result the 3D object model is displayed in correspondence with the target object whose attribute information it matches.
In some embodiments, the attribute information of the target object may also be acquired, and the attribute information of the 3D scene model may be determined according to the attribute information of the target object. In this case, rendering the 3D scene model onto the target image based on the depth information and color information of the non-covered area of the 3D scene model to obtain the rendering result image includes: rendering the 3D scene model based on the depth information and color information of the non-covered area of the 3D scene model and the attribute information of the 3D scene model.

The attribute information of the 3D scene model can be determined according to the attribute information of the target object, so that 3D scene models with different attribute information are rendered for target objects with different attribute information in the target image.
According to a second aspect of the embodiments of the present disclosure, a special effect rendering apparatus is provided, including: a first acquisition module configured to acquire a target image, the target image including a target object; a second acquisition module configured to acquire a 3D object model, the 3D object model including a target area; and a rendering module configured to render the 3D object model onto the target image based on depth information of the 3D object model and color information of a non-target area of the 3D object model other than the target area, to obtain a rendering result image, wherein the target object is displayed in the target area.

According to a third aspect of the embodiments of the present disclosure, a special effect rendering device is provided, including a processor and a memory for storing computer program instructions executable by the processor, wherein the processor is configured to implement the method of any of the above embodiments by executing the computer program instructions.

According to a fourth aspect of the embodiments of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, and when the computer program is executed by a processor, the steps of the method of any of the above embodiments are implemented.
In the technical solutions provided by the embodiments of the present disclosure, a target image including a target object is acquired, and a virtual 3D object model to be rendered onto the target image is acquired. Since the rendered image includes both the target object and the 3D object model, and the 3D object model must not occlude the target object, the 3D object model includes a target area for displaying the target object. Based on the depth information of the 3D object model and the color information of the non-target area of the 3D object model other than the target area, the 3D object model is rendered onto the target image to obtain a rendering result image. Rendering the 3D object model onto the target image in this way fuses the real target object with the virtual 3D object model in the rendering result image, improving interest and immersion and giving the user a stronger sense of being present in the scene.

It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit the present disclosure.
Brief Description of the Drawings

Fig. 1 is a flowchart of a special effect rendering method provided by an exemplary embodiment.

Fig. 2 shows an acquired target image provided by an exemplary embodiment.

Fig. 3 shows part of an acquired 3D object model and part of an acquired 3D scene model provided by an exemplary embodiment.

Fig. 4 shows the result of rendering a 3D object model and a 3D scene model onto the target image, provided by an exemplary embodiment.

Fig. 5 shows a 3D scene model divided into a plurality of sub-scene models, with a sub-scene model in motion, provided by an exemplary embodiment.

Fig. 6 shows a target sub-scene model moving in the rendering result, provided by an exemplary embodiment.

Fig. 7 shows different orientations of the target object, provided by an exemplary embodiment.

Fig. 8 shows the 3D object model and the 3D scene model adjusted according to the orientation of the target object, provided by an exemplary embodiment.

Fig. 9 shows the viewing angle of the rendering camera adjusted according to the orientation of the target object, provided by an exemplary embodiment.

Fig. 10 is a flowchart of a special effect rendering method provided by an exemplary embodiment.

Fig. 11 is a schematic structural diagram of a special effect rendering apparatus provided by an exemplary embodiment.

Fig. 12 is a schematic structural diagram of a special effect rendering device provided by an exemplary embodiment.
Detailed Description

Exemplary embodiments are described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings indicate the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as recited in the appended claims.

The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. As used in the present disclosure and the appended claims, the singular forms "a", "said", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items.

It should be understood that although the terms first, second, third, etc. may be used in the present disclosure to describe various pieces of information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present disclosure, first information may also be referred to as second information and, similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".

At present, when special effect rendering is performed on an image, certain special effects or rendering materials are generally rendered onto a target object in the image. Taking a human face as an example of the target object, a beautification effect may be rendered on the face, decorations such as hats, earrings, and other sticker materials may be added, or the face may be enlarged, shrunk, or distorted. Such special effect rendering is relatively monotonous and lacks interest.

To address the above problems, the embodiments of the present disclosure provide a special effect rendering method, which is suitable for processing images or videos captured in real time as well as images or videos captured and stored in advance. For example, in a real-time capture scenario, a target object (for example, the user's face or body) can be displayed inside a pre-made virtual 3D (three-dimensional) model while shooting, which improves the user experience, enhances the sense of integration between the target object and the 3D model, and gives the user a more immersive feeling. Of course, the above application scenarios are only illustrative and should not be understood as limiting the embodiments of the present disclosure.
Referring to Fig. 1, Fig. 1 is a flowchart of special effect rendering provided by an embodiment of the present disclosure, including the following steps:

S102: acquire a target image, where the target image includes a target object;

S104: acquire a 3D object model, where the 3D object model includes a target area;

S106: render the 3D object model onto the target image based on depth information of the 3D object model and color information of the area of the 3D object model other than the target area (hereinafter also referred to as the non-target area), to obtain a rendering result image, where the target object is displayed in the target area.
In S102, the target image may be acquired during real-time shooting with a camera, or from a pre-captured video or image. Since a captured or stored video frame may or may not include the target object, it may first be determined whether each captured or stored image includes the target object, and an image that includes the target object is determined to be a target image. Whether an image includes the target object may be determined by performing target detection on the image, or based on pre-annotated information. For images that do not include the target object, no special effect rendering is performed. The target object may be a body part such as a human face, hand, or foot; a living body such as a person, an animal, or a plant; or a non-living object such as a building. The type of the target object can be changed according to the requirements of the actual scene, and the embodiments of the present disclosure do not limit the type of the target object.
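As a minimal sketch of this gating step, the snippet below uses an off-the-shelf face detector to decide whether a frame counts as a target image. The choice of detector (an OpenCV Haar cascade) and the function name are illustrative assumptions; the disclosure does not prescribe any particular detection method.

```python
import cv2

# Load a stock frontal-face detector shipped with OpenCV (illustrative choice).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def is_target_image(frame_bgr):
    """Return True if the frame contains at least one target object (here, a face)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```

Frames for which is_target_image returns False would simply be passed through without special effect rendering.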
In S104, a pre-made virtual 3D object model may be acquired, and the 3D object model includes a target area. In the embodiments of the present disclosure, the 3D object model may be a 3D model of an animal, such as a piglet or a little monkey, or a 3D model of a mirror, a painting, and so on; different pre-made 3D object models may be acquired according to the effect required by the actual scene. The target area may be a partial area of the 3D object model. For example, the target area included in the 3D object model may be a body part of the 3D object model: when the 3D object model is a piglet model, the target area may be the piglet model's face, hands, belly, and so on. The target area may also be an area of a specified range in the 3D object model: when the 3D object model is a model of a painting, the target area may be a rectangular area in the middle of the painting, or the area occupied by a certain person or object in the painting. Of course, the target area in the 3D object model can be set according to the effect required by the actual scene. The embodiments of the present disclosure do not limit the 3D object model or the target area in the 3D object model.

In S106, the 3D object model may be rendered onto the target image based on the depth information of the 3D object model and the color information of the non-target area of the 3D object model, to obtain a rendering result image. In some embodiments, the depth information of the target area of the 3D object model is assigned to the pixels of the target image corresponding to the target area, and the depth information and color information of the non-target area of the 3D object model are assigned to the pixels of the target image corresponding to the non-target area. In this way, the 3D object model is rendered onto the target image, and, since the pixels of the target image corresponding to the target area are only given the depth information of the 3D object model, the rendered 3D object model does not occlude the target object in the target image. The final rendering effect is therefore that the real-world target object is integrated into the virtual 3D object model.
In some embodiments, when the 3D object model is rendered onto the target image, each pixel of the target image corresponds to a point of the 3D object model. For the set P1 of points in the non-target area of the 3D object model, the depth information and color information of each point in P1 are determined and assigned to the corresponding pixels of the target image; for the set P2 of points in the target area of the 3D object model, the depth information of each point in P2 is determined and assigned to the corresponding pixels of the target image (which may be the pixels that make up the target object). On this basis, the 3D object model is rendered onto the target image. Of course, in an actual implementation, the determined depth information and/or color information of the points in P1 may first be transformed in some way before being assigned to the corresponding pixels of the target image, and the depth information of the points in P2 may likewise be transformed. The embodiments of the present disclosure do not limit this.
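The following sketch illustrates the per-pixel assignment described above, assuming the 3D object model has already been rasterized under the rendering camera into a depth map, a color map, a coverage mask, and a mask of the target area. All array names are hypothetical, and the snippet is a simplified illustration rather than the implementation of the disclosure.

```python
import numpy as np

def render_object_model(target_image, model_depth, model_color, model_mask,
                        target_mask, depth_buffer):
    """Composite a rasterized 3D object model onto the target image.

    target_image : HxWx3 uint8, the captured frame (keeps the target object).
    model_depth  : HxW float, depth of the rasterized object model per pixel.
    model_color  : HxWx3 uint8, color of the rasterized object model per pixel.
    model_mask   : HxW bool, pixels covered by the object model at all.
    target_mask  : HxW bool, pixels covered by the model's target area.
    depth_buffer : HxW float, current depth per pixel (initialized to +inf).
    """
    out = target_image.copy()
    # Only pixels where the model is in front of whatever was drawn before.
    visible = model_mask & (model_depth < depth_buffer)

    # Target area (set P2): write depth only, so the target object stays visible.
    tgt = visible & target_mask
    depth_buffer[tgt] = model_depth[tgt]

    # Non-target area (set P1): write depth and color, covering the frame.
    non_tgt = visible & ~target_mask
    depth_buffer[non_tgt] = model_depth[non_tgt]
    out[non_tgt] = model_color[non_tgt]
    return out, depth_buffer
```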
In some cases, to further enrich the special effect rendering result, in addition to acquiring the 3D object model, a pre-made virtual 3D scene model may also be acquired, with the 3D object model located in the 3D scene model. The 3D object model is used to render the area around the target object, so that the target object is ultimately displayed in the target area of the 3D object model; the 3D scene model is used to render the area of the target image other than the object area rendered by the 3D object model (hereinafter also referred to as the background area).

Illustratively, the 3D scene model may be rendered as follows: based on the depth information and color information of the area of the 3D scene model other than the covered area (hereinafter also referred to as the non-covered area), the 3D scene model is rendered onto the target image to obtain the rendering result image, where the covered area is the area covered by the target area of the 3D object model from the perspective of the rendering camera.

When rendering the 3D scene model and/or the 3D object model, there is a virtual rendering camera that captures part of the 3D object model and/or the 3D scene model; the viewing angle at which the rendering camera captures that area is the rendering camera's viewing angle.

When the 3D scene model is rendered, it may overlap with the target area of the 3D object model, and this overlap would cause the 3D scene model to occlude the target object that is intended to be displayed in the target area. Therefore, to prevent the target object from being occluded by the 3D scene model, only the depth information and color information of the non-covered area of the 3D scene model may be rendered. As a result, in the rendering result image, the pixels in the target area are not given the color information of the corresponding points of the 3D scene model, so the target area in the rendering result image still shows the target object.

From the perspective of the rendering camera, the target area of the 3D object model covers part of the 3D scene model. In one embodiment, before rendering the 3D scene model, the covered area of the 3D scene model that is covered by the target area of the 3D object model is removed, and the 3D scene model is then rendered based on the depth information and color information of the non-covered area. In another embodiment, the covered area is not removed before rendering; instead, the 3D scene model is rendered based on the depth information of the 3D scene model and the color information of its non-covered area, that is, only depth information is rendered for the covered area, while both depth information and color information are rendered for the non-covered area. With either approach, the final displayed rendering effect is that the target object is displayed in the 3D object model (specifically, in the target area), and the 3D object model is displayed in the 3D scene model (specifically, in the covered area). The position of the 3D object model in the 3D scene model can be set according to actual requirements, which is not limited by the embodiments of the present disclosure.
In some embodiments, when the 3D scene model is rendered onto the target image, each pixel of the target image corresponds to a point of the 3D scene model. For the set Q1 of points in the non-covered area of the 3D scene model, the depth information and color information of each point in Q1 are determined and assigned to the corresponding pixels of the target image; for the set Q2 of points in the covered area of the 3D scene model, the depth information of each point in Q2 is determined and assigned to the corresponding pixels of the target image. On this basis, the 3D scene model is rendered onto the target image. Of course, in an actual implementation, the determined depth information and/or color information of the points in Q1 may first be transformed in some way before being assigned to the corresponding pixels of the target image, and the depth information of the points in Q2 may likewise be transformed. The embodiments of the present disclosure do not limit this.
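Continuing the earlier sketch, the scene model can be composited in the same way, except that the region of the scene hidden behind the object model's target area (the covered area, set Q2) contributes depth only. The scene arrays and the covered-area mask are again hypothetical rasterization outputs under the rendering camera, not names defined by the disclosure.

```python
import numpy as np

def render_scene_model(image, scene_depth, scene_color, scene_mask,
                       covered_mask, depth_buffer):
    """Composite a rasterized 3D scene model onto the partially rendered image."""
    out = image.copy()
    visible = scene_mask & (scene_depth < depth_buffer)

    # Covered area (set Q2): depth only, so the target object shown through the
    # object model's target area is never painted over by the scene.
    cov = visible & covered_mask
    depth_buffer[cov] = scene_depth[cov]

    # Non-covered area (set Q1): depth and color.
    non_cov = visible & ~covered_mask
    depth_buffer[non_cov] = scene_depth[non_cov]
    out[non_cov] = scene_color[non_cov]
    return out, depth_buffer
```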
In a specific embodiment, Fig. 2 shows the acquired target image, in which the target object is a girl's face. Fig. 3 shows part of the acquired 3D object model and part of the 3D scene model: the 3D object model 301 is the model of the small animal carrying a face model in the figure; the target area 302 of the 3D object model is the area of the face model in the figure; and the 3D scene model 303 includes all other models except the 3D object model 301. Fig. 4 shows the rendering result image obtained by rendering the 3D object model 301 and the 3D scene model 303 of Fig. 3 onto the target image of Fig. 2: the target object (the girl's face in Fig. 2) is displayed in the rendering result image, while the other areas of the original target image are covered by the special effects of the 3D object model and the 3D scene model. It can be seen that, with the above special effect rendering method, the target object can be integrated into the pre-made 3D object model and 3D scene model, giving the user a stronger sense of presence, interest, and immersion. In another embodiment, only the 3D object model contained in Fig. 3 may be acquired, and the effect of rendering that 3D object model onto the target image of Fig. 2 is that the target object is displayed in the pre-made 3D object model. This is not further described in the present disclosure.

After the target object has been integrated into the 3D object model and/or the 3D scene model, in order to further enhance the interest of the 3D model, in some embodiments of the present disclosure a motion animation of the 3D object model and/or of the 3D scene model may also be preset. During 3D modeling, the 3D object model and the 3D scene model can be divided into blocks according to actual requirements, for example dividing the 3D object model into a plurality of sub-object models, each of which may be a partial area of the 3D object model, and/or dividing the 3D scene model into a plurality of sub-scene models, each of which may be a partial area of the 3D scene model. One or more sub-object models and/or one or more sub-scene models are then given motion animations along their respective motion paths. After the models are rendered, each such sub-object model and/or sub-scene model moves along its corresponding motion path, achieving the effect of integrating the target object into a moving 3D object model and/or 3D scene model. Of course, according to the requirements of the actual application, only the motion animation of the 3D object model may be set without setting that of the 3D scene model, or only the motion animation of the 3D scene model may be set without setting that of the 3D object model, which is not limited here.
The motion path of a target sub-object model includes a starting position, and that starting position can be set according to actual requirements. When the target sub-object model moves beyond the display range of the rendering result image, it can be re-rendered at the starting position of its motion path so as to keep it in motion.

There may also be multiple starting positions on the motion path of the target sub-object model. In that case, if the target sub-object model moves beyond the display range of the rendering result image, any one of the multiple starting positions may be selected at random when re-rendering the target sub-object model, or one of them may be selected according to preset rules. Moreover, the motion speed and/or motion path of the target sub-object model can be set according to actual requirements, and while the target sub-object model is moving, its motion speed and/or motion path can also be changed according to preset rules; the starting position on its motion path may likewise change as its motion path changes. On this basis, the target sub-object model can perform a certain cyclic motion while displayed on the rendering result image, which improves the interest.

The motion path of a target sub-scene model likewise includes a starting position, which can also be set according to actual requirements. When the target sub-scene model moves beyond the display range of the rendering result image, it can be re-rendered at the starting position of its motion path so as to maintain its motion.

There may also be multiple starting positions on the motion path of the target sub-scene model. In that case, if the target sub-scene model moves beyond the display range of the rendering result image and needs to be re-rendered, any one of the multiple starting positions may be selected at random, or one of them may be selected according to preset rules. Moreover, the motion speed and/or motion path of the target sub-scene model can be set according to actual requirements, and while the target sub-scene model is moving, its motion speed and/or motion path can also be changed according to preset rules; the starting position on its motion path may likewise change as its motion path changes. On this basis, the target sub-scene model can perform a certain cyclic motion while displayed on the rendering result image, which improves the interest. The embodiments of the present disclosure only give several implementations and combinations of implementations by way of example, and are not limited thereto.
A target sub-object model and/or target sub-scene model moving beyond the display range of the rendering result image may mean that the entire block of the target sub-object model and/or target sub-scene model has moved beyond the display range, or that part of the block has moved beyond it; the specific setting depends on actual requirements and is not limited here. In a specific embodiment, referring to Fig. 5, as shown in (5-1) of Fig. 5, the ground scene is divided in advance into five sub-scene models, and the blocks from left to right are denoted block 501, block 502, block 503, block 504, and block 505. The state at this time is the initial state of the rendering result image after the ground scene has been rendered onto the target image. Let block 501 of the ground scene be the target sub-scene model; the position where the target sub-scene model 501 is currently located is its starting position, its movement direction is from left to right, and its motion path is the entire displayed ground scene. The display range of the rendering result image is the range displayed by the five sub-scene models of the ground scene. After a period of motion, the ground scene is as shown in (5-2) of Fig. 5, with the target sub-scene model 501 at the position illustrated. After a further period of motion, the position of the target sub-scene model 501 is as shown in (5-3) of Fig. 5: the target sub-scene model 501 has moved beyond the display range of the ground scene, that is, beyond the display range of the rendering result image. At this point, the target sub-scene model 501 is quickly moved back to the initial position on its motion path; this quick-move process is shown from (5-3) to (5-4) to (5-1) of Fig. 5 (the position of the target sub-scene model 501 is as shown in (5-3), (5-4), and (5-1) of Fig. 5). Finally, the target sub-scene model 501 is re-rendered at the starting position on its motion path, namely its position in (5-1) of Fig. 5. The achieved effect is an animation in which the entire scene appears to be in a certain cyclic motion. Of course, the block division is not limited to the above division of sub-scene models; for example, according to the actual situation and actual requirements, any one or more parameters such as the shape, size, number, and movement of the blocks may differ from what is shown in the figure.
In an embodiment in which both a 3D object model and a 3D scene model are rendered, the 3D object model includes a plurality of sub-object models, the 3D scene model includes a plurality of sub-scene models, and a certain number of sub-object models and/or sub-scene models move along their respective motion paths. Referring to Fig. 6, Fig. 6 shows three frames of a video in which the 3D object model and the 3D scene model have been rendered. In the embodiments of the present disclosure, the target image may be a target image frame, which is not limited here. As shown in (6-1) of Fig. 6, there is a sun and a cloud above the 3D scene model; the sun and the cloud together are set as the target sub-scene model 601. After moving for a period of time, the target sub-scene model 601 is displayed at the position shown in (6-2) of Fig. 6. After moving for a further period of time, it is displayed at the position shown in (6-3) of Fig. 6, at which point the entire target sub-scene model is about to move beyond the display range of the rendering result image. When the entire target sub-scene model 601 moves beyond the display range of the rendering result image, the target sub-scene model 601 is quickly moved back to the initial position shown in (6-1) of Fig. 6, and its motion is maintained, so that the displayed scene is an endlessly looping animation with a high degree of interest. Of course, this is merely an example and is not a limitation.
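The looping behaviour described above can be sketched as follows, under the assumption that each moving sub-model is tracked by a horizontal position in render-result pixels and advances by a constant speed per frame. The class and field names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class MovingSubModel:
    x: float        # current left edge of the block, in render-result pixels
    start_x: float  # starting position on the block's motion path
    speed: float    # pixels per frame, positive means moving left to right

def step(sub: MovingSubModel, display_width: float) -> None:
    """Advance the sub-model one frame; wrap to the start once it leaves the display range."""
    sub.x += sub.speed
    if sub.x > display_width:   # the whole block has moved beyond the display range
        sub.x = sub.start_x     # re-render it at the start of its motion path
```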
Among the acquired target images, the orientation of the target object may differ from image to image. The orientation of the target object may be the direction the target object faces: for example, when the target object is a human face, the orientation of the face is the orientation of the target object; when the target object is a human hand, the orientation of the target object may be the orientation of the palm, or the orientation of the fingers, and so on, and can be preset according to the specific scene. In a specific embodiment of the present disclosure, as shown in Fig. 2, (7-1) of Fig. 7, and (7-2) of Fig. 7, the orientations of the girl's face in these three images are different. To make the rendering effect more realistic and more interesting, before rendering the 3D object model and the 3D scene model, the orientation of the target object may first be acquired, and the orientation of the 3D object model and/or the 3D scene model adjusted based on the orientation of the target object. Specifically, the orientation of the target object may be determined first, and the orientation of the 3D object model and/or the 3D scene model determined according to it, for example by adjusting the orientation of the 3D object model so that, in the rendering result, the orientation of the target object is consistent with that of the 3D object model. For instance, if the target object faces left, the 3D object model is adjusted so that it also faces left in the rendering result.

The orientation of the target object may be acquired by extracting features of the target image and computing the orientation with an orientation recognition algorithm, or by pre-training a neural network for determining the orientation of the target object and using that network to determine the orientation; the embodiments of the present disclosure do not limit this.
In the embodiments of the present disclosure, Fig. 4, (8-1) of Fig. 8, and (8-2) of Fig. 8 show the results of rendering the above three images (Fig. 2, (7-1) of Fig. 7, and (7-2) of Fig. 7), respectively. Before rendering the 3D object model and the 3D scene model, their orientations are adjusted according to the orientation of the girl's face (the target object), so that the orientation of the girl's face matches the orientations of the 3D object model and the 3D scene model, making the final rendering more realistic and more interesting. In one embodiment, the solvePnP algorithm (a robust pose estimation algorithm) can be used with the detected key points of the face and the corresponding points on a standard 3D head to compute the transform matrix of the 3D object model and the 3D scene model, and their orientations are then adjusted accordingly. Of course, according to actual requirements, only the orientation of the 3D object model may be adjusted based on the orientation of the target object without adjusting that of the 3D scene model, or only the orientation of the 3D scene model may be adjusted without adjusting that of the 3D object model, which is not limited here. In this embodiment the target object is a girl's face; in an actual scene, the target object may also be a human hand, in which case the orientation of the target object is the orientation of the fingers, or a car, in which case the orientation is the orientation of the front of the car, and so on, which is not limited in the present disclosure. The angle by which the orientation of the 3D object model is adjusted can be determined according to actual requirements: the 3D object model may be adjusted so that its orientation is the same as that of the target object, always perpendicular to it, or at any angle to it, and so on. Similarly, the orientation of the 3D scene model may be adjusted to be the same as that of the target object, always perpendicular to it, or at any angle to it, and so on, which is not limited in the embodiments of the present disclosure.
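A hedged sketch of this pose-estimation step is given below: the head pose is estimated from detected 2D face landmarks and the corresponding points on a canonical 3D head, and the resulting rotation and translation are packed into a 4x4 transform that could then be applied to the 3D object model and/or 3D scene model. The landmark source, the canonical 3D points, and the camera intrinsics are assumptions; only cv2.solvePnP and cv2.Rodrigues are standard OpenCV calls.

```python
import cv2
import numpy as np

def model_transform_from_face(landmarks_2d, canonical_3d, camera_matrix,
                              dist_coeffs=None):
    """Return a 4x4 transform aligning the model's orientation with the face pose.

    landmarks_2d : Nx2 detected face key points in image coordinates.
    canonical_3d : Nx3 corresponding points on a standard 3D head model.
    camera_matrix: 3x3 intrinsics of the capturing camera.
    """
    ok, rvec, tvec = cv2.solvePnP(
        canonical_3d.astype(np.float64),
        landmarks_2d.astype(np.float64),
        camera_matrix, dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    transform = np.eye(4)
    transform[:3, :3] = rotation
    transform[:3, 3] = tvec.ravel()
    return transform
```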
In one example of adjusting the 3D object model based on the orientation of the target object, referring to (9-1) of Fig. 9, the arrow in the figure indicates the viewing angle of the rendering camera, and the leopard model on the right is the 3D object model. Assume that the target object currently faces right; the leopard model then also faces right. When the target object faces left, the rendered effect is as shown in (9-2) of Fig. 9, and the leopard model is also adjusted to face left. In this embodiment, the viewing angle of the rendering camera does not change; this embodiment is merely an example and not a limitation.

In the above manner, the orientation of the 3D object model and/or the 3D scene model is adjusted based on the orientation of the target object, which diversifies the way the 3D object model and/or the 3D scene model are displayed in the rendering result and improves the interest.
在渲染3D对象模型和3D场景模型之前,除了可以基于目标对象的朝向调整3D对象模型和3D场景模型的朝向,还可以基于目标对象的朝向调整渲染相机的角度参数。其中,渲染相机的角度参数即为渲染相机渲染3D对象模型和/或3D场景模型时,渲染相机相对于3D对象模型和/或3D场景模型的角度。具体的,可以是先确定目标对象的朝向,根据目标对象的朝向确定渲染相机渲染的角度,例如调整渲染相机的渲染的角度以使渲染的结果中目标对象的朝向与3D对象模型的朝向一致。例如,目标对象的朝向向左,则调整渲染相机的渲染的角度以使渲染结果中3D对象模型的朝向也是向左。Before rendering the 3D object model and the 3D scene model, in addition to adjusting the orientation of the 3D object model and the 3D scene model based on the orientation of the target object, an angle parameter of the rendering camera may also be adjusted based on the orientation of the target object. Wherein, the angle parameter of the rendering camera is the angle of the rendering camera relative to the 3D object model and/or the 3D scene model when the rendering camera renders the 3D object model and/or the 3D scene model. Specifically, the orientation of the target object may be determined first, and the rendering angle of the rendering camera is determined according to the orientation of the target object, for example, the rendering angle of the rendering camera is adjusted so that the orientation of the target object in the rendering result is consistent with the orientation of the 3D object model. For example, if the orientation of the target object is to the left, the rendering angle of the rendering camera is adjusted so that the orientation of the 3D object model in the rendering result is also to the left.
In a specific embodiment, the girl's face is oriented differently in Fig. 2, (7-1) in Fig. 7 and (7-2) in Fig. 7. Instead of adjusting the orientations of the 3D object model and the 3D scene model according to the orientation of the face, the angle parameter of the rendering camera is adjusted; the final rendered results are shown in Fig. 4, (8-1) in Fig. 8 and (8-2) in Fig. 8. As can be seen, adjusting the angle parameter of the rendering camera according to the orientation of the girl's face also makes the rendering more realistic and engaging. The angle parameter can be set according to actual needs: the camera angle may be the same as or opposite to the orientation of the target object, or at any other angle to it, which is not limited in the embodiments of the present disclosure.
In an example of adjusting the angle parameter of the rendering camera based on the orientation of the target object, see (9-1) in Fig. 9, where the arrow indicates the viewing direction of the rendering camera and the leopard model on the right is the 3D object model. Assume the target object faces right; the rendering camera then also looks toward the right. When the target object turns to face left, the angle parameter of the rendering camera is adjusted as shown in (9-3) in Fig. 9, so that the camera looks toward the left. In this example the orientation of the 3D object model does not change. The example is illustrative only and is not limiting.
In the above manner, adjusting the angle parameter of the rendering camera based on the orientation of the target object likewise diversifies how the 3D object model and/or the 3D scene model appear in the rendering result and makes the effect more engaging.
Of course, depending on practical requirements, the orientation of the 3D object model and/or the 3D scene model and the angle parameter of the rendering camera may also be adjusted simultaneously based on the orientation of the target object. The specific manner of adjustment is not limited in the embodiments of the present disclosure.
During actual shooting, or in a selected image, several target objects may appear in the target image at the same time. When the target image includes multiple target objects, the method of rendering the 3D object model further includes: acquiring multiple 3D object models in one-to-one correspondence with the target objects, each 3D object model including a target area. For example, when it is detected in advance that the target image contains multiple target objects, 3D object models including target areas are acquired in a number equal to the number of target objects, so that each target object is ultimately displayed in the target area of its own 3D object model. This enriches the rendering options when the target image contains multiple target objects and makes the effect more engaging.
Based on this method, multiple 3D object models are acquired and each target object is displayed in the 3D object model that corresponds to it, so that the rendering result image contains multiple 3D object models, each containing one target object. Depending on the actual application scenario, the 3D object models corresponding to different target objects may be the same model or different models. For example, when the target image contains three target objects, three identical piglet models (3D object models) may be acquired, one per target object, before special effect rendering; alternatively, three different models may be acquired, such as a piglet model, a puppy model and a lamb model, with each target object corresponding to one of them; or the three acquired 3D object models may be two piglet models and one lamb model, with each target object corresponding to one of them. The specific correspondence is not limited here.
In some embodiments, the acquired target image may first be analyzed to determine whether it contains multiple target objects; if so, further recognition is performed, the 3D object model corresponding to each target object is determined from the multiple 3D object models according to the recognition result, and the models are matched and displayed one to one. For example, features of the target image may be recognized, and the 3D object model corresponding to those features is determined and displayed together with the target image.
In some specific embodiments, in a multi-person selfie scenario the target object is a human face. If two faces appear in the target image at the same time, two 3D object models can be acquired, with the two faces corresponding to the two 3D object models one to one and each 3D object model including a target area. The final rendered effect is that the rendering result image contains two 3D object models whose target areas display the two faces respectively. In this way, multiple target objects can each be merged into their own 3D object model, so that all of them simultaneously experience the fun and immersion of the special effect rendering in the embodiments of the present disclosure.
In some embodiments the target object is a human face and the 3D object model includes a virtual face model. Face recognition may additionally be performed on the faces in the target image, and the 3D object model corresponding to each face is determined based on the recognition result. In a specific embodiment, when the target image contains two faces, two different 3D object models are acquired, for example a piglet model and a lamb model. Feature information of the two faces, such as eye, nose and mouth information, is obtained; face recognition is performed on this feature information to determine the identity of each face, and the 3D object model corresponding to each face is determined from its identity. For example, for faces A and B, face recognition based on their feature information determines the identities of A and B; the 3D object model corresponding to face A is determined to be the piglet model and the one corresponding to face B the lamb model. In this way each distinct face can correspond to a different 3D object model, which enhances the interest and realism of the special effect rendering. For example, while user A and user B are shooting, the terminal device determines through face recognition that user A's face corresponds to the piglet model and user B's face to the lamb model; even if user A's face leaves the range the camera can capture for a while, i.e. is not shown in the target image for some time, when it reappears the terminal device can still retrieve the piglet model corresponding to user A through face recognition and display user A's face in the piglet model. Of course, the number of faces in the above embodiment is not limiting. The correspondence between faces and 3D object models can be defined according to the actual scene, for example a larger face corresponding to the piglet model; this is not limited in the embodiments of the present disclosure.
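A sketch of the identity-to-model bookkeeping described here is shown below, assuming an upstream face-recognition step that yields a stable identity string per face; the model names are the illustrative piglet/lamb examples from the text.

```python
def assign_models(face_ids, available_models, assignment=None):
    """Keep a persistent face-identity -> 3D object model assignment so that a face
    which leaves and later re-enters the frame gets the same model back."""
    assignment = dict(assignment or {})
    used = set(assignment.values())
    for face_id in face_ids:
        if face_id not in assignment:
            # prefer a model not yet in use; fall back to reuse if all are taken
            free = [m for m in available_models if m not in used]
            assignment[face_id] = free[0] if free else available_models[0]
            used.add(assignment[face_id])
    return assignment

# e.g. assign_models(["user_a", "user_b"], ["piglet", "lamb"]) keeps "user_a" on the
# piglet model even across frames in which that face is temporarily absent.
```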
During actual shooting, or in an image selected by the user, the attribute information of the target object may also be recognized to further enhance the interest of the special effect rendering, with the 3D object model matched to the attribute information of the corresponding target object.
The attribute information of the target object may be detected first. When the target object is a person, the attribute information may be, for example, age or gender; when the target object is a flower, it may be, for example, the color or species of the flower. A 3D object model matching the attribute information of the target object is then determined and used for rendering. In some specific embodiments, when the target object is a human face, user attribute information such as gender and age can be computed from the information of the feature points on the face. For example, a 3D object model matching the user's gender is acquired: for a male face, a 3D object model with short hair and a muscular build; for a female face, a 3D object model with long hair and a slender figure. In another example, the user's age attribute is computed from the feature points of the face and a 3D object model matching it is acquired: a cute small-animal model for a child's face, a ferocious beast model for an adult's face, and so on. Of course, beyond the face attributes listed above, many other face attributes may be used, such as whether glasses are worn. Attributes other than face attributes may also be used; for example, when the target object is a hand, attributes such as the size of the hand or its degree of aging, and when the target object is a tree, attributes such as the type of tree or its height. For all of the attribute examples listed above, a pre-built 3D object model matching the attribute can be determined from the attribute information of the target object. These embodiments are illustrative only and the present disclosure is not limited to them.
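One possible way to express this attribute matching is sketched below, assuming the face-analysis step returns a small attribute dictionary; the thresholds and model names are illustrative assumptions only.

```python
def select_object_model(attributes):
    """Map estimated target attributes (age, gender, ...) to a pre-built 3D object model."""
    if attributes.get("age", 0) < 12:
        return "cute_animal_model"            # child's face -> cute small animal
    if attributes.get("gender") == "male":
        return "short_hair_muscular_model"
    if attributes.get("gender") == "female":
        return "long_hair_slender_model"
    return "default_model"
```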
In a specific embodiment in which the 3D object model includes a virtual face model, the attribute information of the target object can be used not only to acquire a matching 3D object model but also to determine the attributes of the 3D scene model to be rendered. Specifically, rendering the 3D scene model further includes: acquiring the attribute information of the target object; determining the attribute information of the 3D scene model from the attribute information of the target object; and rendering the 3D scene model based on the depth information and color information of the non-covered area of the 3D scene model together with the attribute information of the 3D scene model.
The attribute information of the target object may be detected first, the attribute information of a matching 3D scene model is determined from it, and the 3D scene model to be rendered is then determined and used for rendering. In a specific embodiment, when the target object is a human face, user attribute information such as gender and age can be computed from the information of the feature points on the face. For example, the attribute information of the 3D scene model may be determined from the user's age: if the target object is a child's face, the 3D scene model is given child attributes and a 3D scene model containing child elements, such as cute cartoon characters, is rendered; if the target object is an adult's face, the 3D scene model is given adult attributes and a 3D scene model containing adult elements, such as ferocious beasts, is rendered. If the attribute information of the 3D scene model is determined from the user's gender, a 3D scene model containing female elements can be determined from a female attribute and one containing male elements from a male attribute, where the female attribute may be, for example, a pink color scheme for the 3D scene model and the male attribute a blue color scheme. These embodiments are illustrative only and are not limiting.
In the above embodiments, the target object may also be, for example, a flower, in which case the attribute information may be the species or color of the flower; or a building, in which case the attribute information may be the height or color of the building. This is not limited in the embodiments of the present disclosure.
In some embodiments, the attribute information of the target object may be determined first, and a first 3D scene model with the corresponding attribute information is determined from it. When the first 3D scene model is rendered, each point on the target image corresponds to a point on the first 3D scene model. For the pixel set M1 corresponding to the area of the first 3D scene model outside the covered area, the depth information and color information of each point in M1 are determined and assigned to the corresponding pixels of the target image; for the pixel set M2 of the covered area of the first 3D scene model, the depth information of each point in M2 is determined and assigned to the corresponding pixels of the target image. In this way the first 3D scene model is rendered onto the target image.
In some embodiments, the attribute information of the target object may instead be determined first and the attribute information of the 3D scene model determined from it. When the 3D scene model is rendered, each point on the target image corresponds to a point on the 3D scene model. For the pixel set N1 corresponding to the area of the 3D scene model outside the covered area, the depth information and color information of each point in N1 are determined, and the color information is then changed according to preset rules based on the above attribute information; the depth information and the changed color information of each point in N1 are assigned to the corresponding pixels of the target image. For the pixel set N2 of the covered area of the 3D scene model, the depth information of each point in N2 is determined and assigned to the pixels of the target image that correspond to the points of N2. In this way the 3D scene model is rendered onto the target image.
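A simplified numpy sketch of the per-pixel assignment described in the last two paragraphs is given below: non-covered pixels of the scene model receive both depth and (optionally attribute-adjusted) color, while covered pixels receive depth only, so that the object model's target area can keep showing the target object. The buffer layout and mask inputs are assumptions made for the example.

```python
import numpy as np

def render_scene_model(target_rgb, depth_buffer, scene_depth, scene_rgb,
                       covered_mask, color_adjust=None):
    """Write the 3D scene model into the target image buffers.
    covered_mask is True where the scene model is covered by the object model's
    target area (as seen from the rendering camera)."""
    out_rgb = target_rgb.copy()
    out_depth = depth_buffer.copy()
    visible = scene_depth < out_depth               # simple depth test against what is already drawn
    non_covered = visible & ~covered_mask
    covered = visible & covered_mask
    rgb = scene_rgb if color_adjust is None else color_adjust(scene_rgb)
    out_rgb[non_covered] = rgb[non_covered]         # depth + color for the non-covered area
    out_depth[non_covered] = scene_depth[non_covered]
    out_depth[covered] = scene_depth[covered]       # depth only for the covered area
    return out_rgb, out_depth
```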
The above embodiments are all illustrations of embodiments of the present disclosure.
Referring to Fig. 10, Fig. 10 is a specific flowchart of an embodiment of the present disclosure.
According to S1002, a target image is acquired, the target image including a target object. The target image may be acquired while the user is shooting in real time through a camera, or obtained from a pre-recorded video or image. The target object may be a body part such as a face, a hand or a foot, or a small animal, a plant, a building and so on; the user may change the type of target object according to the actual shooting scene, and there may be one or more target objects. According to S1004, a pre-built virtual 3D object model and 3D scene model are acquired, the 3D object model including a target area and being located in the 3D scene model. One 3D object model may be acquired for a single target object, or an equal number of 3D object models may be acquired for multiple target objects; the models acquired in equal number may all be the same model or may be different models, which is not limited in the present disclosure. According to S1006, the 3D object model and the 3D scene model are rendered onto the target image to obtain a rendering result, based on the depth information of the 3D object model and the color information of the non-target area of the 3D object model other than the target area, and based on the depth information and color information of the non-covered area of the 3D scene model other than the covered area, where the target object is displayed in the target area and the covered area is the area covered by the target area of the 3D object model from the viewing angle of the rendering camera. When rendering the 3D scene model, the covered area may be culled from the 3D scene model before rendering and the model then rendered based on the depth information and color information of its non-covered area; alternatively, the covered area is not culled before rendering, and the 3D scene model is rendered based on its depth information and the color information of its non-covered area. The effect shown on the rendering result image is that the target object is displayed in the target area of the pre-built 3D object model, and the 3D object model is located in the 3D scene model. In other words, the target object is merged into the virtual 3D object model and 3D scene model, giving the user a more immersive feeling and improving the user experience.
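Putting S1002 to S1006 together, the overall flow could look roughly like the following sketch. The injected callables are hypothetical hooks standing in for the steps discussed above; they are not APIs defined by the disclosure.

```python
def special_effect_render(frame, detect, load_models, render_object, render_scene):
    """End-to-end sketch of S1002-S1006 with the individual steps supplied as callables."""
    # S1002: acquire the target image and check that it contains a target object
    target = detect(frame)
    if target is None:
        return frame                          # no special effect rendering without a target object
    # S1004: acquire the pre-built virtual 3D object model and 3D scene model
    object_model, scene_model = load_models(target)
    # S1006: render both models onto the target image
    rgb, depth = render_object(frame, object_model)     # target area: depth only; elsewhere: depth + color
    rgb, depth = render_scene(rgb, depth, scene_model)   # covered area: depth only; non-covered: depth + color
    return rgb
```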
Of course, all of the above embodiments are examples, and the algorithms in them are examples as well; they do not limit the embodiments of the present disclosure. In the embodiments of the present disclosure, one or more algorithms may be used to compute the attribute information and feature information of the target object, which is not elaborated here.
The above are only specific implementations of the embodiments of the present disclosure and do not limit the present solution. It should be noted that a person of ordinary skill in the art can make improvements and refinements without departing from the principle of the solution, and such improvements and refinements shall also fall within the protection scope of the solution.
Referring to Fig. 11, an embodiment of the present disclosure further provides a special effect rendering apparatus, which includes:
a first acquisition module 1102, configured to acquire a target image, the target image including a target object;
a second acquisition module 1104, configured to acquire a 3D object model, the 3D object model including a target area; and
a rendering module 1106, configured to render the 3D object model onto the target image based on the depth information of the 3D object model and the color information of the non-target area of the 3D object model other than the target area, to obtain a rendering result, where the target object is displayed in the target area.
In some embodiments, the target image acquired by the first acquisition module 1102 may be captured by the user in real time through a camera, or obtained from a pre-recorded video or image. Before the target image is acquired, it is first determined whether the image includes the target object; a target image including the target object is acquired, and special effect rendering may be skipped for images that do not include the target object. The target object may be a body part such as a face, a hand or a foot, a small animal, a building and so on, and the user may change the type of target object according to the actual shooting scene.
In some embodiments, the 3D object model acquired by the second acquisition module 1104 is a pre-built virtual 3D object model that includes a target area. The 3D object model may be a small animal, such as a piglet or a monkey, or an object, such as a mirror or a painting; different types of pre-built 3D object models may be acquired according to the effect required by the actual scene. The target area included in the 3D object model may be a body part of the model; for example, when the 3D object model is a piglet, the target area may be the piglet's face, hands or feet. The target area may also be a designated range of the 3D object model; for example, when the 3D object model is a painting, the target area may be a rectangular region in the middle of the painting, or the region occupied by a person or thing in the painting. Of course, the target area of the 3D object model can be set according to the effect required by the actual scene.
In some embodiments, the second acquisition module 1104 may also acquire a pre-built virtual 3D scene model, the 3D object model being located in the 3D scene model.
In some embodiments, when rendering the 3D object model onto the target image, the rendering module 1106 may assign the depth information of the target area to the pixels of the target image corresponding to the target area, and assign the depth information and color information of the non-target area of the 3D object model to the pixels of the target image corresponding to the non-target area of the 3D object model. The effect finally presented is that a part of the real world is merged into the virtual 3D object model, i.e. the target object is merged into the 3D object model.
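A numpy sketch of this assignment is shown below, under the assumption that the object model has already been rasterized into per-pixel depth and color maps plus a mask marking its target area; pixels of the target area keep the camera image's color so the target object remains visible there. These buffer names are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def render_object_model(target_rgb, object_depth, object_rgb, object_mask, target_area_mask):
    """object_mask marks pixels covered by the 3D object model; target_area_mask marks
    the subset belonging to its target area (e.g. the opening where the face appears)."""
    out_rgb = target_rgb.copy()
    out_depth = np.full(target_rgb.shape[:2], np.inf, dtype=np.float32)
    non_target = object_mask & ~target_area_mask
    out_depth[object_mask] = object_depth[object_mask]   # depth for target and non-target areas
    out_rgb[non_target] = object_rgb[non_target]         # color only outside the target area,
                                                         # so the real target object shows through
    return out_rgb, out_depth
```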
In some embodiments, the rendering module 1106 also renders both the 3D object model and the 3D scene model onto the target image, the 3D object model being located in the 3D scene model. On the basis of the way the 3D object model is rendered in the previous embodiment, this further includes the following: from the viewing angle of the rendering camera, the target area of the 3D object model covers the covered area of the 3D scene model. In one embodiment, the covered area is culled from the 3D scene model before rendering, and the 3D scene model is then rendered based on the depth information and color information of its non-covered area, so that the final displayed rendering effect is that the target object is displayed in the 3D object model and the 3D object model is displayed in the 3D scene model. In another embodiment, the covered area is not culled before rendering; instead, the 3D scene model is rendered based on its depth information and the color information of its non-covered area, so that the final displayed rendering effect is the same: the target object is displayed in the 3D object model and the 3D object model is displayed in the 3D scene model.
In some embodiments, the apparatus in the embodiments of the present disclosure further includes an animation setting module for presetting the motion animation of the 3D object model and the motion animation of the 3D scene model. In a specific embodiment, during 3D modeling the animation setting module first divides the 3D object model and/or the 3D scene model into blocks as required, for example dividing the 3D object model into multiple sub-object models and the 3D scene model into multiple sub-scene models. After the models are rendered, the 3D sub-object models and/or 3D sub-scene models move along their respective motion paths, which merges the target object into a moving 3D object model and/or 3D scene model. Of course, according to the needs of the actual application, only the motion animation of the 3D object model may be set without setting that of the 3D scene model, or only that of the 3D scene model without that of the 3D object model, so as to meet the user's personalized needs. Because each block is in motion, a target 3D sub-object model and/or 3D sub-scene model may move beyond the display range of the rendering result image. In a specific embodiment, the animation setting module therefore further includes a moving subunit which, for a target 3D sub-object model and/or target 3D sub-scene model whose motion has moved beyond the display range of the rendering result image, quickly moves it back to its initial position and keeps it moving.
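A sketch of the moving subunit's behaviour is given below, assuming each sub-model carries a motion path sampled at a fixed step and a helper that reports whether its current position is still inside the display range; all names here are illustrative.

```python
def step_sub_models(sub_models, inside_display_range):
    """Advance every sub-object/sub-scene model along its motion path; a model that has
    moved outside the display range snaps back to the start of its path and keeps moving."""
    for sub in sub_models:
        sub["t"] += sub["speed"]                 # advance along the motion path
        sub["position"] = sub["path"](sub["t"])
        if not inside_display_range(sub["position"]):
            sub["t"] = 0.0                       # quickly move back to the initial position
            sub["position"] = sub["path"](0.0)
    return sub_models
```

Each entry here is a small dictionary carrying a path callable; in a real pipeline this state would more likely live on the sub-model objects themselves.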
In some embodiments, the first acquisition module 1102 may also be used to acquire the orientation of the target object. The apparatus in the embodiments of the present disclosure further includes a first adjustment module, configured to adjust the orientation of the 3D object model and/or the 3D scene model based on the orientation of the target object, and a second adjustment module, configured to adjust the angle parameter of the rendering camera based on the orientation of the target object.
In some embodiments, the apparatus in the embodiments of the present disclosure further includes a correspondence module, configured to acquire, when the target image includes multiple target objects, multiple 3D object models in one-to-one correspondence with the target objects, each 3D object model including a target area.
In some embodiments, the apparatus in the embodiments of the present disclosure further includes a recognition module that recognizes the target image to obtain a recognition result, and then determines, based on the recognition result, the 3D object model corresponding to each target object from the multiple 3D object models.
In some embodiments, the recognition module may acquire the attribute information of the target object and, according to the attribute information of the target object, acquire a 3D object model matching that attribute information.
In some embodiments, the attribute information of the 3D scene model may also be determined from the attribute information of the target object acquired by the recognition module, and the 3D scene model is rendered based on the depth information and color information of the non-covered area of the 3D scene model other than the covered area together with the attribute information of the 3D scene model.
Since the apparatus embodiments substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant parts. The apparatus embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, i.e. they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present disclosure. A person of ordinary skill in the art can understand and implement this without creative effort.
Embodiments of the special effect rendering apparatus of the present disclosure may be applied to computer devices, such as servers or terminal devices. In one embodiment the computer device further includes a camera, which is used to acquire target images captured by the user in real time. The apparatus embodiments may be implemented by software, or by hardware or a combination of software and hardware. Taking software implementation as an example, as an apparatus in the logical sense, it is formed by the processor of the device reading the corresponding computer program instructions from a non-volatile memory into memory and running them. At the hardware level, Fig. 12 shows a hardware structure diagram of a computer device in which the special effect rendering apparatus of an embodiment of the present disclosure is located; in addition to the processor 1210, memory 1230, network interface 1220 and non-volatile memory 1240 shown in Fig. 12, the server or electronic device in which the special effect rendering apparatus 1231 of the embodiment is located may also include other hardware according to the actual functions of the computer device, which is not described in detail here.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the method described in any of the foregoing embodiments is implemented. Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
Specific embodiments of the present disclosure have been described above. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results.
Other embodiments of the present disclosure will readily occur to those skilled in the art after considering the specification and practicing the invention applied for herein. The present disclosure is intended to cover any variations, uses or adaptations that follow its general principles and include common knowledge or customary technical means in the art not applied for in the present disclosure. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
The above are only preferred embodiments of the present disclosure and are not intended to limit it. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present disclosure shall be included within its scope of protection.

Claims (15)

  1. A special effect rendering method, characterized in that the method comprises:
    acquiring a target image, the target image including a target object;
    acquiring a 3D object model, the 3D object model including a target area; and
    rendering the 3D object model onto the target image to obtain a rendering result image based on depth information of the 3D object model and color information of a non-target area of the 3D object model other than the target area, wherein the target object is displayed in the target area.
  2. The method according to claim 1, characterized in that rendering the 3D object model onto the target image based on the depth information of the 3D object model and the color information of the non-target area of the 3D object model other than the target area comprises:
    assigning the depth information of the target area to pixels of the target image corresponding to the target area; and
    assigning the depth information and color information of the non-target area of the 3D object model to pixels of the target image corresponding to the non-target area of the 3D object model.
  3. The method according to claim 1, characterized in that the 3D object model is located in a pre-acquired 3D scene model, and the method further comprises:
    rendering the 3D scene model onto the target image to obtain the rendering result image based on depth information and color information of a non-covered area of the 3D scene model other than a covered area, wherein the covered area is the area covered by the target area of the 3D object model from the viewing angle of a rendering camera.
  4. The method according to claim 3, characterized in that:
    the 3D object model comprises a plurality of sub-object models, at least one of which moves along a corresponding motion path; and/or
    the 3D scene model comprises a plurality of sub-scene models, at least one of which moves along a corresponding motion path.
  5. The method according to claim 4, characterized in that the method further comprises:
    acquiring a display range of the rendering result image; and
    rendering a sub-object model whose motion has moved beyond the display range at the starting position of the motion path corresponding to that sub-object model.
  6. The method according to claim 4, characterized in that the method further comprises:
    acquiring a display range of the rendering result image; and
    rendering a sub-scene model whose motion has moved beyond the display range at the starting position of the motion path corresponding to that sub-scene model.
  7. The method according to claim 1 or 3, characterized in that, before rendering the 3D object model and/or the 3D scene model, the method further comprises:
    acquiring an orientation of the target object; and
    adjusting the orientation of the 3D object model and/or the 3D scene model in the rendering result image based on the orientation of the target object.
  8. The method according to claim 1 or 3, characterized in that, before rendering the 3D object model and/or the 3D scene model, the method further comprises:
    acquiring an orientation of the target object; and
    adjusting an angle parameter of a rendering camera based on the orientation of the target object, wherein the 3D object model and/or the 3D scene model is displayed on the rendering result image with different orientations under different angle parameters.
  9. The method according to any one of claims 1 to 8, characterized in that, when the target image includes a plurality of target objects, acquiring the 3D object model comprises:
    acquiring a plurality of 3D object models respectively corresponding to the target objects, each 3D object model including a target area.
  10. The method according to claim 9, characterized in that acquiring the 3D object model comprises:
    recognizing the target image to obtain a recognition result; and
    determining, based on the recognition result, the 3D object model corresponding to each of the target objects from the plurality of 3D object models.
  11. The method according to claim 10, characterized in that the recognition result includes attribute information of the target object, and the 3D object model matches the attribute information of the corresponding target object.
  12. The method according to claim 3, characterized in that the method further comprises:
    acquiring attribute information of the target object; and
    determining attribute information of the 3D scene model according to the attribute information of the target object;
    wherein rendering the 3D scene model onto the target image based on the depth information and color information of the non-covered area of the 3D scene model comprises:
    rendering the 3D scene model based on the depth information and color information of the non-covered area of the 3D scene model and the attribute information of the 3D scene model.
  13. A special effect rendering apparatus, characterized in that the apparatus comprises:
    a first acquisition module, configured to acquire a target image, the target image including a target object;
    a second acquisition module, configured to acquire a 3D object model, the 3D object model including a target area; and
    a rendering module, configured to render the 3D object model onto the target image to obtain a rendering result image based on depth information of the 3D object model and color information of a non-target area of the 3D object model other than the target area, wherein the target object is displayed in the target area.
  14. A special effect rendering device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1 to 12 when executing the computer program.
  15. A computer-readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 12 are implemented.
PCT/CN2022/134990 2022-01-30 2022-11-29 Special effect rendering WO2023142650A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210114061.5A CN114494556A (en) 2022-01-30 2022-01-30 Special effect rendering method, device and equipment and storage medium
CN202210114061.5 2022-01-30

Publications (1)

Publication Number Publication Date
WO2023142650A1 true WO2023142650A1 (en) 2023-08-03

Family

ID=81479226

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/134990 WO2023142650A1 (en) 2022-01-30 2022-11-29 Special effect rendering

Country Status (2)

Country Link
CN (1) CN114494556A (en)
WO (1) WO2023142650A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494556A (en) * 2022-01-30 2022-05-13 北京大甜绵白糖科技有限公司 Special effect rendering method, device and equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150371447A1 (en) * 2014-06-20 2015-12-24 Datangle, Inc. Method and Apparatus for Providing Hybrid Reality Environment
CN109147037A (en) * 2018-08-16 2019-01-04 Oppo广东移动通信有限公司 Effect processing method, device and electronic equipment based on threedimensional model
CN109147023A (en) * 2018-07-27 2019-01-04 北京微播视界科技有限公司 Three-dimensional special efficacy generation method, device and electronic equipment based on face
CN112738420A (en) * 2020-12-29 2021-04-30 北京达佳互联信息技术有限公司 Special effect implementation method and device, electronic equipment and storage medium
CN113240692A (en) * 2021-06-30 2021-08-10 北京市商汤科技开发有限公司 Image processing method, device, equipment and storage medium
CN114494556A (en) * 2022-01-30 2022-05-13 北京大甜绵白糖科技有限公司 Special effect rendering method, device and equipment and storage medium

Also Published As

Publication number Publication date
CN114494556A (en) 2022-05-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22923442

Country of ref document: EP

Kind code of ref document: A1