WO2021184932A1 - Method and device for generating an anthropomorphic 3D model - Google Patents

Method and device for generating an anthropomorphic 3D model

Info

Publication number: WO2021184932A1
Application number: PCT/CN2021/070703 (CN2021070703W)
Authority: WO — WIPO (PCT)
Prior art keywords: model, virtual, pattern, anthropomorphic, personification
Other languages: English (en), French (fr)
Inventor: 刘建滨
Original Assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司
Publication of WO2021184932A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 — Animation
    • G06T 13/20 — 3D [Three Dimensional] animation
    • G06T 13/40 — 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T 15/00 — 3D [Three Dimensional] image rendering
    • G06T 15/005 — General purpose rendering architectures
    • G06T 19/00 — Manipulating 3D models or images for computer graphics
    • G06T 19/006 — Mixed reality

Description

  • This application relates to the field of virtual reality, and more specifically, to a method and device for generating an anthropomorphic 3D model.
  • In the related art, anthropomorphic animation of real objects generally relies on 3D model production: artists model the real object in 3D in advance, create the corresponding anthropomorphic virtual images (such as virtual facial features and a virtual body model) on the 3D model, and then apply texture mapping to form an animation.
  • The design of facial expressions and body movements is complicated; even simple animations require a long period of professional adjustment.
  • As a result, the effect of anthropomorphic avatars on 3D models is poor, the user's subjective experience of the anthropomorphic avatar on the 3D model is unsatisfactory, and the efficiency and quality are relatively low. Moreover, for a different object the corresponding 3D model needs to be recreated, and the facial features and body movements need to be remade for the new 3D model, which is complicated and poorly adaptable.
  • In view of this, the present application provides a method and device for generating an anthropomorphic 3D model, which can quickly realize the augmented reality effect of anthropomorphizing objects and reduce the complexity of designing facial expressions and limb movements.
  • In addition, a better anthropomorphic virtual image can be obtained in subjective experience, and the effect of the anthropomorphic image on the 3D model can be enhanced.
  • a method for generating an anthropomorphic 3D model is provided.
  • The execution subject of the method can be a terminal device that can display virtual images or animations, such as a mobile phone, an AR device, a personal digital processing device, or a VR device, or it can be a chip applied in such a terminal device.
  • The method includes: obtaining a 3D model of a target object; obtaining an anthropomorphic pattern of the target object; determining the position and projection size of the anthropomorphic pattern on the 3D model according to the shape features of the target object; and rendering the anthropomorphic pattern on the 3D model according to the position and projection size of the anthropomorphic pattern on the 3D model, to generate an anthropomorphic 3D model.
  • The shape feature of an object can be understood as the number of pixels in the object image; according to the number of pixels in the object image, the length, width, and height of the object, or the ratio between any two or all three of these parameters, can be determined. Alternatively, the shape feature of the object can also be understood as the ratio between any two, or all three, of the object's actual length, width, and height.
  • According to the method for generating an anthropomorphic 3D model provided in the first aspect, the position of the anthropomorphic pattern on the 3D model and the projection size of the anthropomorphic facial features and limbs on the 3D model are determined according to the appearance features of the real object, which makes the presentation of the anthropomorphic pattern on the 3D model more accurate and vivid.
  • In addition, the anthropomorphic pattern is rendered onto the corresponding 3D model of the object by projection, which avoids the poor effect caused by a dangling or embedded anthropomorphic pattern, quickly realizes the anthropomorphic AR effect of the object, and reduces the complexity of designing facial expressions and body movements.
  • In some implementations, rendering the anthropomorphic pattern on the 3D model specifically includes: when rendering the anthropomorphic pattern, determining, according to the size of the anthropomorphic pattern, the distance between the anthropomorphic pattern and the 3D model and the distance between the anthropomorphic pattern and the virtual projection point, so that the size of the anthropomorphic pattern on the projection surface of the 3D model is the same as the projection size determined according to the shape features of the target object.
  • In this way, the size of the anthropomorphic pattern on the 3D model can be adjusted, and the anthropomorphic image can be projected onto the 3D model quickly and accurately at the determined size, which improves rendering efficiency and makes the presentation of the anthropomorphic pattern on the 3D model more accurate and vivid.
  • the anthropomorphic pattern may be virtual facial features or virtual limbs.
  • In some implementations, the projection size S of the anthropomorphic pattern on the projection surface of the 3D model satisfies: S = W1 × (X1 + X2) / X1, where W1 is the size of the anthropomorphic pattern, X2 is the distance between the anthropomorphic pattern and the projection surface of the 3D model, X1 is the distance between the anthropomorphic pattern and the virtual projection point, and the projection surface of the 3D model is the surface of the hexahedral bounding box of the 3D model that is parallel to the plane of the anthropomorphic pattern.
  • In some implementations, the anthropomorphic pattern includes virtual facial features and/or virtual limbs.
  • The number of the virtual facial features can be one or more, and the number of the virtual limbs can also be one or more.
  • In some implementations, when the anthropomorphic pattern includes virtual facial features and/or virtual limbs, determining the position and projection size of the anthropomorphic pattern on the 3D model according to the shape features of the target object includes: determining the position of the virtual facial features and/or the virtual limbs on the 3D model according to the shape features of the target object and the proportional relationships of the virtual facial features and/or the virtual limbs.
  • The proportional relationships of the virtual facial features include at least one of: the ratio of the distance between the eyes and the top of the head to the length of the human head, the ratio of the distance between the mouth and the top of the head to the length of the human head, and the ratio of the distance between the eyes to the width of the human head.
  • The proportional relationships of the virtual limbs include at least one of: the ratio of the distance from the shoulders to the top of the head to the height, the ratio of the distance from the legs to the top of the head to the height, the ratio of the length of the upper limbs to the height, and the ratio of the length of the lower limbs to the height.
  • In this way, the positions of the virtual facial features and the virtual limbs on the 3D model are determined according to the proportional relationships of the facial features and/or limbs of an ordinary person in reality, so that a better anthropomorphic virtual image is obtained in subjective experience.
  • In some implementations, the method further includes: determining the position of a virtual decoration on the 3D model according to the position of the virtual facial features and/or the virtual limbs on the 3D model.
  • The virtual decoration is manually or automatically selected from a virtual decoration resource, and the virtual decoration resource includes a plurality of virtual decorations.
  • The virtual decoration resource may include decorations for the virtual facial features and the virtual limbs, such as hats, scarves, shoes, clothes, or other decorations.
  • For example, a virtual hat is located above the virtual eyes, a virtual scarf is located below the virtual head, and so on.
  • the virtual decoration resource is pre-stored.
  • In some implementations, obtaining the 3D model of the target object specifically includes any one of: calling the 3D model locally, calling the 3D model externally, or generating the 3D model.
  • In some implementations, obtaining the anthropomorphic pattern of the target object specifically includes: manually or automatically selecting, according to the target object, the anthropomorphic pattern from pre-stored anthropomorphic pattern resources, where the anthropomorphic pattern resources include a plurality of anthropomorphic patterns.
  • In this way, the 3D model and the anthropomorphic pattern on the 3D model are decoupled: for different 3D models, there is no need to recreate the virtual facial features and virtual limbs, which reduces design cost and complexity.
  • In some implementations, the format of the anthropomorphic pattern includes at least one of an anthropomorphic picture or an anthropomorphic graphics interchange format (GIF) animation.
  • In some implementations, the method further includes: using a camera to identify and/or locate the target object.
  • For example, a camera is used to photograph and scan the object to realize recognition and/or positioning of the target object.
  • In a second aspect, a device for generating an anthropomorphic 3D model is provided, which includes a processing unit.
  • The processing unit is configured to obtain a 3D model of a target object.
  • The processing unit is further configured to obtain an anthropomorphic pattern of the target object.
  • The processing unit is further configured to determine the position and projection size of the anthropomorphic pattern on the 3D model according to the shape features of the target object. The shape feature of the object can be understood as the number of pixels in the object image, from which the length, width, and height of the object, or the ratio between any two or all three of these parameters, can be determined; alternatively, the shape feature of the object can also be understood as the ratio between any two, or all three, of the object's actual length, width, and height.
  • The processing unit is further configured to render the anthropomorphic pattern on the 3D model according to the position and projection size of the anthropomorphic pattern on the 3D model, to generate an anthropomorphic 3D model.
  • According to the device for generating an anthropomorphic 3D model provided in the second aspect, the position of the anthropomorphic pattern on the 3D model and the projection size of the anthropomorphic facial features and limbs on the 3D model are determined according to the appearance features of the real object, which makes the presentation of the anthropomorphic pattern on the 3D model more accurate and vivid.
  • In addition, the anthropomorphic pattern is rendered onto the corresponding 3D model of the object by projection, which avoids the poor rendering effect caused by a dangling or embedded anthropomorphic pattern, quickly realizes the anthropomorphic AR effect of the object, and reduces the complexity of designing facial expressions and body movements.
  • In some implementations, the processing unit is specifically configured to: when rendering the anthropomorphic pattern, determine, according to the size of the anthropomorphic pattern, the distance between the anthropomorphic pattern and the 3D model and the distance between the anthropomorphic pattern and the virtual projection point, so that the size of the anthropomorphic pattern on the projection surface of the 3D model is the same as the projection size determined according to the shape features of the target object.
  • the anthropomorphic pattern may be virtual facial features or virtual limbs.
  • In this way, the size of the anthropomorphic pattern on the 3D model can be adjusted, and the anthropomorphic image can be projected onto the 3D model quickly and accurately at the determined size, which improves rendering efficiency and makes the presentation of the anthropomorphic pattern on the 3D model more accurate and vivid.
  • In some implementations, the projection size S of the anthropomorphic pattern on the projection surface of the 3D model satisfies: S = W1 × (X1 + X2) / X1, where W1 is the size of the anthropomorphic pattern, X2 is the distance between the anthropomorphic pattern and the projection surface of the 3D model, X1 is the distance between the anthropomorphic pattern and the virtual projection point, and the projection surface of the 3D model is the surface of the hexahedral bounding box of the 3D model that is parallel to the plane of the anthropomorphic pattern.
  • In some implementations, the anthropomorphic pattern includes virtual facial features and/or virtual limbs.
  • The number of the virtual facial features can be one or more, and the number of the virtual limbs can also be one or more.
  • In some implementations, the processing unit is specifically configured to determine the position of the virtual facial features and/or the virtual limbs on the 3D model according to the shape features of the target object and the proportional relationships of the virtual facial features and/or the virtual limbs.
  • The proportional relationships of the virtual facial features include at least one of: the ratio of the distance between the eyes and the top of the head to the length of the human head, the ratio of the distance between the mouth and the top of the head to the length of the human head, and the ratio of the distance between the eyes to the width of the human head.
  • The proportional relationships of the virtual limbs include at least one of: the ratio of the distance from the shoulders to the top of the head to the height, the ratio of the distance from the legs to the top of the head to the height, the ratio of the length of the upper limbs to the height, and the ratio of the length of the lower limbs to the height.
  • Determining the positions of the virtual facial features and the virtual limbs on the 3D model according to the proportional relationships of the facial features and/or limbs of an ordinary person yields a better anthropomorphic virtual image in subjective experience and makes the presentation of the anthropomorphic pattern on the 3D model more accurate.
  • In some implementations, the processing unit is further configured to determine the position of a virtual decoration on the 3D model according to the position of the virtual facial features and/or the virtual limbs on the 3D model.
  • The virtual decoration is manually or automatically selected from a virtual decoration resource, and the virtual decoration resource includes a plurality of virtual decorations.
  • The virtual decoration resource may include decorations for the virtual facial features and the virtual limbs, such as hats, scarves, shoes, clothes, or other decorations.
  • For example, a virtual hat is located above the virtual eyes, a virtual scarf is located below the virtual head, and so on.
  • the virtual decoration resource is pre-stored.
  • In some implementations, the processing unit is specifically configured to: call the 3D model locally, call the 3D model externally, or generate the 3D model.
  • In some implementations, the processing unit is specifically configured to: manually or automatically select, according to the target object, the anthropomorphic pattern from pre-stored anthropomorphic pattern resources, where the anthropomorphic pattern resources include a plurality of anthropomorphic patterns.
  • In this way, the 3D model and the anthropomorphic pattern on the 3D model are decoupled: for different 3D models, there is no need to recreate the virtual facial features and limbs, which reduces design cost and complexity.
  • In some implementations, the format of the anthropomorphic pattern includes at least one of an anthropomorphic picture or an anthropomorphic graphics interchange format (GIF) animation.
  • In some implementations, the processing unit is further configured to use the camera to recognize and/or locate the target object.
  • For example, a camera is used to photograph and scan the object to realize recognition and/or positioning of the target object.
  • In some implementations, the device may further include an object recognition and positioning service module, for example, a camera module.
  • The object recognition and positioning service module is used to identify and locate objects (real objects) and to output the 6-degree-of-freedom posture of the real objects.
  • In some implementations, the device may further include a camera device; for example, the camera device may be a camera.
  • In some implementations, the device is a terminal device (such as a mobile phone), another VR device (such as VR glasses), an AR device, a wearable device, a personal digital assistant (PDA), or another device that can display virtual images or animations.
  • In a third aspect, a communication device is provided, which includes a unit for executing the steps in the first aspect or any possible implementation of the first aspect.
  • In a fourth aspect, a communication device is provided, which includes at least one processor and a memory, where the at least one processor is configured to execute the method in the first aspect or any possible implementation of the first aspect.
  • In a fifth aspect, a communication device is provided, which includes at least one processor and an interface circuit, where the at least one processor is configured to execute the method in the first aspect or any possible implementation of the first aspect.
  • In a sixth aspect, a terminal device is provided, which includes the communication device provided in the foregoing second aspect, the communication device provided in the foregoing third aspect, or the communication device provided in the foregoing fourth aspect.
  • In a seventh aspect, a computer program product is provided, which includes a computer program; when the computer program is executed by a processor, it is used to execute the method in the first aspect or any possible implementation of the first aspect.
  • In an eighth aspect, a computer-readable storage medium is provided, which stores a computer program; when the computer program is executed, it is used to execute the method in the first aspect or any possible implementation of the first aspect.
  • In a ninth aspect, a chip is provided, which includes a processor configured to call and run a computer program from a memory, so that a communication device installed with the chip executes the method in the first aspect or any possible implementation of the first aspect.
  • In the method and device for generating an anthropomorphic 3D model provided in this application, the 3D model and the anthropomorphic pattern on the 3D model are decoupled, so for different 3D models there is no need to recreate virtual facial features and limbs.
  • The position of the anthropomorphic pattern on the 3D model and the projection size of the anthropomorphic facial features and limbs on the 3D model are determined according to the appearance features of the real object, which makes the presentation of the anthropomorphic pattern on the 3D model more accurate and vivid.
  • The anthropomorphic pattern is rendered onto the corresponding 3D model of the object by projection, which avoids the poor effect caused by a dangling or embedded anthropomorphic pattern, quickly realizes the anthropomorphic AR effect of the object, and reduces the complexity of designing facial expressions and body movements.
  • Figure 1 is a schematic diagram of the shape of an object drawn by an artist.
  • Figure 2 is a schematic diagram of the animation finally produced by the artist on the 3D model.
  • Fig. 3 is a schematic diagram of an example application scenario of the present application.
  • Fig. 4 is a schematic diagram of another example application scenario of the present application.
  • FIG. 5 is a schematic flowchart of an example of a method for generating an anthropomorphic 3D model provided by an embodiment of the present application.
  • Fig. 6 is a schematic diagram of an example of a hexahedral bounding box in an example of the present application.
  • FIG. 7 is a schematic flowchart of another example of a method for generating an anthropomorphic 3D model provided by an embodiment of the present application.
  • FIG. 8 is a schematic top view when an anthropomorphic pattern is projected in an example of the embodiment of the present application.
  • FIG. 9 is a schematic diagram of an example of the effect of projecting a virtual facial features image onto a 3D model in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an example of the positions of the eyes and the mouth on the 3D model in the embodiment of the present application.
  • FIG. 11 is a schematic diagram of an example of the positions of shoulders and legs on a 3D model in an embodiment of the present application.
  • FIG. 12 is a schematic diagram of the effect of an example of virtual facial features after projection in an embodiment of the present application.
  • Fig. 13 is a schematic diagram of an example of a user selecting a virtual facial expression in an embodiment of the present application.
  • FIG. 14 is a schematic diagram of the effect after virtual facial features projection in another example in the embodiment of the present application.
  • Fig. 15 is a schematic diagram of an example of a user selecting virtual limbs in an embodiment of the present application.
  • FIG. 16 is a schematic diagram of another example of virtual facial features and effects after virtual decoration projection in an embodiment of the present application.
  • Fig. 17 is a schematic diagram of an example of a user selecting a virtual decoration in an embodiment of the present application.
  • FIG. 18 is a schematic block diagram of a communication device provided by an embodiment of the present application.
  • FIG. 19 is a schematic block diagram of another example of a communication device according to an embodiment of the present application.
  • FIG. 20 is a schematic block diagram of a terminal device provided by an embodiment of the present application.
  • In the related art, anthropomorphic animation of objects generally uses 3D model production: artists perform 3D modeling of real objects in advance, make the corresponding facial features and body models on the 3D model, and then apply texture mapping to form an animation.
  • For example, Figure 1 shows a schematic diagram of the shape of an object drawn by an artist. Based on the original painting, the artist creates a 3D model of the object through modeling software such as Maya, 3Dmax, or Blender, then designs the texture map and pastes it on the 3D model. The facial features and limbs required for anthropomorphizing the object are also created directly during the 3D modeling. Finally, bones are added to the facial features in the 3D model, actions are designed, skinning is adjusted, weights are modified, and so on, to form the final animation. Figure 2 shows a schematic diagram of the final animation.
  • To address this, this application provides a method for generating an anthropomorphic 3D model.
  • In this method, the 3D model and the anthropomorphic pattern on the 3D model are decoupled, that is, for different 3D models there is no need to recreate the virtual facial features, limbs, and so on.
  • The position of the anthropomorphic pattern on the 3D model and the projection size of the anthropomorphic facial features and limbs on the 3D model are determined according to the actual size of the 3D object, so a good anthropomorphic virtual image can be obtained in subjective experience.
  • The anthropomorphic pattern is projected onto the corresponding 3D model of the object, which avoids the poor generation of the anthropomorphic pattern on the 3D model caused by dangling or embedded patterns, quickly realizes the anthropomorphic augmented reality (AR) effect of the object, and reduces the complexity of designing facial expressions and body movements.
  • Fig. 3 is a schematic diagram of an example application scenario of this application.
  • In this scenario, a terminal device (such as a mobile phone) presents an anthropomorphic pattern (such as virtual facial features and limbs) on a real object.
  • The virtual facial features can realize lip synchronization and facial expression driving based on voice or text, and the virtual limbs can likewise be driven based on voice or text understanding.
  • Combining real objects with virtual facial features and limbs allows interaction among multiple virtual images (also called anthropomorphic patterns) and between virtual images and other real objects, which enhances the playability of the application.
  • Fig. 4 is a schematic diagram of another example application scenario of this application.
  • In this scenario, a virtual reality (VR) device (such as VR glasses) presents anthropomorphic patterns (such as virtual facial features and limbs) on a real object.
  • Combining real objects with virtual facial features and limbs allows interaction among multiple avatars (also called anthropomorphic patterns) and between avatars and other real objects, which improves the playability of the application.
  • It should be understood that FIG. 3 and FIG. 4 do not limit the application scenarios of the embodiments of the present application.
  • this application can also be applied to other scenes, such as the process of 3D animation production.
  • The method for generating an anthropomorphic 3D model provided in the present application is described below with reference to FIG. 5. It should be understood that the execution subject of the method can be a terminal device (such as a mobile phone), another VR device (such as VR glasses), an AR device, a wearable device, a personal digital assistant (PDA), or another device that can display virtual images or animations.
  • This is not limited in the embodiments of the present application.
  • The terminal equipment in the embodiments of the present application may refer to user equipment, an access terminal, a user unit, a user station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, wireless communication equipment, a user agent, or a user device. Alternatively, it may be a cellular phone with the function of displaying 3D animation, a cordless phone, a Session Initiation Protocol (SIP) phone, a wireless local loop (WLL) station, a handheld device with a wireless communication function, a computing device, an in-vehicle device, a wearable device, a terminal device in the 5G network, a terminal device in another future communication system, or the like, which is not limited in the embodiments of the present application.
  • The method 200 for generating an anthropomorphic 3D model shown in FIG. 5 may include steps S210 to S240. Each step in the method 200 is described in detail below with reference to FIG. 5.
  • S210: Obtain a 3D model of the target object.
  • S220: Obtain an anthropomorphic pattern of the target object.
  • S230: Determine the position and projection size of the anthropomorphic pattern on the 3D model according to the shape features of the target object.
  • S240: Render the anthropomorphic pattern on the 3D model according to the position and projection size of the anthropomorphic pattern on the 3D model, to generate an anthropomorphic 3D model.
  • the terminal device may first obtain a 3D model of the target object (referred to as "object” in the following description).
  • The object can be any real object that exists in reality, such as a beverage bottle, a vase, or a table.
  • Specifically, the terminal device may obtain the 3D model of the object through a local call, an external call, or self-generation.
  • Local calling can be understood as follows: the 3D models corresponding to different objects have been stored on the terminal device, and the terminal device can select the 3D model corresponding to the object from the multiple stored 3D models according to the type and size of the object.
  • External calling can be understood as follows: the 3D models corresponding to different objects have been stored on an external device (for example, a server or another device), and the terminal device can select the 3D model corresponding to the object from the multiple 3D models stored in the external device according to the type and size of the object, and obtain the selected 3D model.
  • the 3D model generated by the terminal device itself can be understood as the terminal device uses modeling software, such as Maya, 3Dmax, or Blender, to generate a 3D model corresponding to a real object according to the type and size of the object.
  • the type, size, etc. can be referred to as the shape feature of the object.
  • the terminal device can obtain the 3D model corresponding to the object according to the shape feature of the object. It should be understood that, in the embodiment of the present application, the terminal device may also obtain the 3D model corresponding to the object in other ways, which is not limited in the embodiment of the present application.
  • the terminal device may also recognize and/or locate the object by using a camera device.
  • the camera device here may be, for example, a camera, a video camera, etc., or may also be other types of image acquisition devices.
  • Specifically, the terminal device first uses the camera to photograph and scan the object, so as to position and recognize the real object (which can also be called the 3D object), determine the position of the real object in the camera coordinate system, and output the 6-degree-of-freedom (6DoF) posture of the real object through a 3D object detection and positioning algorithm.
  • In this way, the 3D model of the real object can be placed in a position and posture that exactly match the real object.
  • For example, the VR device can load the 3D model of the real object and place it according to the 6DoF posture of the real object, so that the 3D digital model of the real object is at the same position in the camera coordinate system.
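  • To make the placement step concrete, the following sketch shows how a 6DoF posture (three rotation angles plus a translation) reported by the detection and positioning algorithm could be applied to the vertices of a 3D model so that the digital model coincides with the real object in the camera coordinate system. The conventions used here (XYZ Euler angles in radians, metres) and the function names are illustrative assumptions, not an implementation prescribed by this application.

```python
import numpy as np

def pose_to_matrix(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 transform from a 6DoF pose (assumed XYZ Euler angles in
    radians and a translation in camera coordinates)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx          # combined rotation
    T[:3, 3] = [tx, ty, tz]           # translation of the model origin
    return T

def place_model(vertices, pose_6dof):
    """Transform model vertices (N x 3) into the camera coordinate system."""
    T = pose_to_matrix(*pose_6dof)
    homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homogeneous @ T.T)[:, :3]

# Example: place three model vertices at the detected pose of the real object.
verts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.2, 0.0]])
placed = place_model(verts, (0.0, np.pi / 2, 0.0, 0.05, 0.0, 0.8))
```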
  • each real object can be surrounded by a hexahedral box.
  • This hexahedral box can be understood as the hexahedral bounding box of the real object, and the hexahedral bounding box can also be understood as a limit box or a bounding box.
  • The geometric center of the hexahedral box can be understood as the center of the real object; that is, the geometric center of the real object is determined as the center of the 3D model of the object.
  • The size of the real object is proportional to the size of the 3D model corresponding to the real object.
  • For example, the size of the real object and the size of the 3D model corresponding to the real object may be proportionally reduced or enlarged.
  • For example, the actual size of the 3D model of a door displayed on the terminal device can be 0.2 meters in length and 0.15 meters in width, while the marked size on that 3D model can be 2 meters in length and 1.5 meters in width. That is, the marked size on the 3D model corresponding to the real object displayed on the terminal device may be the same as the size of the real object.
  • the posture of the real object is the same as the posture of the 3D model corresponding to the real object displayed on the terminal device.
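  • As a rough illustration of the hexahedral bounding box and of the proportional relationship between real and model sizes described above, the sketch below computes an axis-aligned bounding box and its geometric center from model vertices, and derives a uniform scale factor between a real measurement and its displayed model counterpart; the function names and the uniform-scale assumption are illustrative, not requirements of this application.

```python
import numpy as np

def bounding_box(vertices):
    """Axis-aligned hexahedral bounding box: (min corner, max corner, center)."""
    lo = vertices.min(axis=0)
    hi = vertices.max(axis=0)
    center = (lo + hi) / 2.0          # geometric center, taken as the model center
    return lo, hi, center

def real_to_model_scale(real_size, model_size):
    """Uniform scale between the real object and its displayed 3D model,
    e.g. a 2 m long door displayed as a 0.2-unit-long model."""
    return model_size / real_size

verts = np.array([[0.0, 0.0, 0.0], [0.15, 0.0, 0.0],
                  [0.15, 0.2, 0.05], [0.0, 0.2, 0.05]])
lo, hi, center = bounding_box(verts)
scale = real_to_model_scale(real_size=2.0, model_size=0.2)   # 1:10 display scale
```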
  • the personification pattern of the object can be obtained.
  • the application in the terminal device may automatically or manually select the 2D personification pattern.
  • the application on the terminal device can automatically load multiple available facial expressions or multiple available virtual limbs to display to the user on the device.
  • The user can select one of the facial expressions or virtual limbs to act on the 3D model.
  • the terminal device determines multiple available facial expressions or multiple virtual limbs in combination with the type, size, and posture of the recognized real object, and presents them to the user for selection.
  • The user can select, according to his or her own preferences and the size and posture of the 3D model, one of the facial expressions or virtual limbs to act on the 3D model corresponding to the real object.
  • The type of a real object can be understood as its specific category, such as a bottle, a table, or a book.
  • For example, the terminal device can pre-store a table of different anthropomorphic patterns corresponding to different types of real objects, and different poses of the same real object can also correspond to different anthropomorphic patterns. After determining the real object, or the pose of the real object, the terminal device can present the multiple anthropomorphic patterns corresponding to the real object to the user for selection.
  • the terminal device can automatically determine the facial expressions or virtual limbs acting on the 3D model according to the type, size, and posture of the real object.
  • the terminal device may pre-store a table that includes different anthropomorphic patterns corresponding to different types of real objects.
  • different postures of the same real object can also correspond to different anthropomorphic patterns.
  • the terminal device can automatically determine the anthropomorphic pattern acting on the 3D model of the real object in combination with the type, size, and posture of the recognized real object.
  • the terminal device can also automatically determine the anthropomorphic pattern to act on the 3D model of the real object in combination with the user's hobby or the user's previous selection.
  • For example, the terminal device can use the anthropomorphic pattern that the user previously selected for the 3D model of the same real object as the anthropomorphic pattern to be used this time, or determine, from the multiple anthropomorphic patterns that the user previously selected for the same 3D model of the real object, the most frequently used one as the anthropomorphic pattern to be used this time.
  • the embodiment of the present application does not limit the process of the terminal device automatically determining the anthropomorphic pattern acting on the 3D model.
  • a large number of personification pattern resources may be predefined and stored in the terminal device or on the server, and the personification pattern resources include multiple or multiple kinds of personification patterns.
  • Anthropomorphic patterns can include virtual facial features, virtual limbs, and so on.
  • a large number of virtual decoration resources may be predefined and stored in the terminal device or on the server.
  • The virtual decoration resources may include decorations for virtual facial features and virtual limbs, such as hats, scarves, shoes, clothes, or other decorations.
  • the anthropomorphic pattern resource may include a virtual decoration resource.
  • In this way, a suitable anthropomorphic pattern is directly selected from the pre-stored anthropomorphic pattern resources, and a suitable virtual decoration is selected from the pre-stored virtual decoration resources and added to the 3D model corresponding to the real object. That is, the 3D model and the anthropomorphic pattern on the 3D model are decoupled: for different 3D models, there is no need to recreate the virtual facial features and limbs, which reduces design cost and complexity.
  • In the embodiment of the present application, the format of the anthropomorphic pattern may include an anthropomorphic picture or an anthropomorphic graphics interchange format (GIF) animation. It should be understood that the format of the anthropomorphic pattern may also include other formats, which is not limited in the present application.
  • the position and projection size of the anthropomorphic pattern on the 3D model can be determined according to the shape characteristics of the object.
  • The shape feature of an object can be understood as the number of pixels in the image of the object; according to the number of pixels, the length, width, and height of the object, or the ratio between any two or all three of these parameters, can be determined. Alternatively, the shape feature of the object can also be understood as the ratio between any two, or all three, of the object's actual length, width, and height.
  • In other words, the position of the anthropomorphic pattern on the 3D model and its size on the projection surface of the 3D model are determined according to the appearance features of the real object corresponding to the 3D model.
  • In this way, the anthropomorphic pattern and the 3D model match and coordinate with each other, which makes the presentation of the anthropomorphic pattern on the 3D model more accurate and vivid; at the same time, by calculating the positions and sizes of the facial features and limbs of the anthropomorphic animation in this way, a better anthropomorphic virtual image can be obtained in subjective experience.
  • the projection surface of the 3D model can be understood as a surface parallel to the hexahedral bounding box of the 3D model and the plane of the anthropomorphic pattern. Since the hexahedral bounding box of the 3D model includes two planes parallel to the plane of the anthropomorphic pattern, the projection plane here can be a plane close to the anthropomorphic pattern and parallel to the plane of the anthropomorphic pattern.
  • the front surface of the hexahedral bounding box shown in FIG. 6 is the projection surface.
  • the size of the anthropomorphic pattern on the projection surface of the 3D model can be understood as the projection size of the anthropomorphic pattern on the plane parallel to the hexahedral bounding box of the 3D model and the plane of the anthropomorphic pattern.
  • In S240, the anthropomorphic pattern can be rendered (which can also be called projected) onto the 3D model according to the position of the anthropomorphic pattern on the 3D model and the projection size of the anthropomorphic pattern on the projection surface of the 3D model, to generate an anthropomorphic 3D model. That is, a 2D image (the anthropomorphic pattern) is rendered or projected onto the 3D object so that the facial features presented in the 2D image fit perfectly on the 3D object, which avoids the poor facial-feature effect caused by patterns hanging off or being embedded in the 3D object.
  • After the anthropomorphic 3D model is generated, it can be displayed to the user through the display or display device of the terminal device.
  • In the method for generating an anthropomorphic 3D model provided in this application, the position of the anthropomorphic pattern on the 3D model and the projection size of the anthropomorphic facial features and limbs on the 3D model are determined according to the appearance features of the real object, which makes the presentation of the anthropomorphic pattern on the 3D model more accurate and vivid.
  • The anthropomorphic pattern is rendered onto the corresponding 3D model of the object by projection, which avoids the poor effect caused by a dangling or embedded anthropomorphic pattern, quickly realizes the anthropomorphic AR effect of the object, and reduces the complexity of designing facial expressions and body movements.
  • FIG. 7 is a schematic flowchart of a method for generating an anthropomorphic 3D model in some embodiments of the present application.
  • On the basis of the method steps shown in FIG. 5, S240 of the method, that is, rendering the anthropomorphic pattern onto the 3D model according to the position and projection size of the anthropomorphic pattern on the 3D model, includes S241: when rendering the anthropomorphic pattern, determine, according to the size of the anthropomorphic pattern, the distance between the anthropomorphic pattern and the 3D model and the distance between the anthropomorphic pattern and the virtual projection point, so that the size of the anthropomorphic pattern on the projection surface of the 3D model is the same as the projection size determined according to the shape features of the target object.
  • For steps S210, S220, and S230 shown in FIG. 7, reference may be made to the related descriptions of S210, S220, and S230 above. For brevity, details are not repeated here.
  • The virtual projection point can be understood as a virtual projection device, for example, a virtual camera: the light emitted from the virtual projection point passes through the anthropomorphic pattern and extends to the 3D model, so that the anthropomorphic pattern is projected onto the projection surface of the 3D model. That is, light projection is used to project the anthropomorphic pattern onto the 3D model, and the image is finally formed on the 3D model.
  • the size of the anthropomorphic pattern can be understood as the size of the 2D anthropomorphic pattern itself automatically or manually selected by the user.
  • the distance between the anthropomorphic pattern and the 3D model can be understood as the distance between the anthropomorphic pattern and the projection surface of the 3D model.
  • In this way, the size of the anthropomorphic pattern on the 3D model can be adjusted, and the anthropomorphic image can be projected or rendered onto the 3D model quickly and accurately at the determined size, which improves rendering efficiency and makes the presentation of the anthropomorphic pattern on the 3D model more accurate and vivid.
  • FIG. 8 shows an example of a schematic top view when a personification pattern is rendered.
  • a virtual camera (or virtual projection camera) can be understood as a virtual projection point in a terminal device.
  • In this example, the anthropomorphic pattern is a virtual facial-features image, and the projection camera emits light. The distance between the virtual camera and the virtual facial-features image is X1, the distance between the virtual facial-features image and the projection surface of the 3D model is X2, and the size of the virtual facial-features image is W1.
  • the light emitted by the virtual camera passes through any pixel position Pf on the virtual facial features picture, and a projection point Pt is obtained on the surface of the 3D model.
  • FIG. 9 is a schematic diagram of an example of the effect of projecting a virtual facial features image onto a 3D model in an embodiment of the application.
  • the background color of the picture can be set to transparent before projection.
  • the position of the entire virtual facial features image on the projection surface of the 3D model can also be adjusted.
  • In S230, the position of the anthropomorphic pattern on the 3D model is determined; therefore, during rendering, the position of the anthropomorphic pattern can be adjusted so that its projection position on the 3D model is the same as the determined position. For example, in the example shown in Figure 8, the anthropomorphic pattern can be moved up, down, left, or right to adjust its projection position on the 3D model.
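  • A minimal sketch of the projection geometry described around Figure 8: a ray from the virtual projection point through a pixel position Pf on the pattern plane is extended to the projection surface of the hexahedral bounding box to obtain the projected point Pt. The coordinate convention (positions measured from the optical axis of the virtual projection point) is an assumption made for illustration.

```python
def project_pixel(pf_u, pf_v, x1, x2):
    """Project one pattern pixel onto the bounding-box projection surface.

    pf_u, pf_v : pixel position Pf on the pattern plane, measured from the
                 optical axis of the virtual projection point.
    x1         : distance from the virtual projection point to the pattern plane.
    x2         : distance from the pattern plane to the projection surface.
    Returns the position of the projected point Pt on the projection surface.
    """
    scale = (x1 + x2) / x1            # similar triangles along the viewing ray
    return pf_u * scale, pf_v * scale

# A pixel 0.03 units off-axis, with X1 = 0.5 and X2 = 1.0, lands 0.09 units off-axis.
pt = project_pixel(0.03, 0.0, x1=0.5, x2=1.0)
```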
  • Specifically, the projection size S of the anthropomorphic pattern on the projection surface of the 3D model satisfies the following formula (1):
  • S = W1 × (X1 + X2) / X1    (1)
  • where W1 is the size of the anthropomorphic pattern, X2 is the distance between the anthropomorphic pattern and the projection surface of the 3D model, and X1 is the distance between the anthropomorphic pattern and the virtual projection point.
  • The projection size of the anthropomorphic pattern on the 3D model needs to be adapted to the size of the 3D model itself, so as to obtain a better anthropomorphic avatar in subjective experience. Therefore, during projection or rendering, the size of the anthropomorphic pattern on the projection surface of the 3D model needs to be adjusted so that it matches the projection size previously determined according to the shape features of the real object corresponding to the 3D model.
  • The position of the projection camera, the size of the virtual facial-features image, the distance X1 between the projection camera and the facial-features image, and the distance X2 between the facial-features image and the 3D model all affect the projection imaging size S on the 3D model.
  • Therefore, X1 and X2 can be adjusted according to the target size St required by the application, so that the size of the anthropomorphic pattern on the projection surface of the 3D model meets the requirement. Adjusting the projection size of the anthropomorphic pattern on the 3D model through formula (1) improves the accuracy and efficiency of the adjustment and is easy to implement.
  • In addition, the size of the anthropomorphic pattern itself is generally determined once the user or the application selects the pattern.
  • After the user or the application selects the anthropomorphic image, the size of the anthropomorphic pattern on the projection surface of the 3D model can also be made to fit the size of the 3D model itself by adjusting the size W1 of the anthropomorphic pattern itself.
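  • The following sketch applies formula (1) in both directions: computing the projected size S from W1, X1 and X2, and, conversely, choosing X2 so that the projection matches a target size St required by the application (here X1 is held fixed, which is only one of the possible adjustments; W1 could be adjusted instead, as noted above).

```python
def projected_size(w1, x1, x2):
    """Formula (1): size of the anthropomorphic pattern on the projection surface."""
    return w1 * (x1 + x2) / x1

def distance_for_target(w1, x1, target_s):
    """Solve formula (1) for X2 given a target projected size St (St >= W1)."""
    return x1 * (target_s / w1 - 1.0)

s = projected_size(w1=0.1, x1=0.5, x2=1.0)                 # -> 0.3
x2 = distance_for_target(w1=0.1, x1=0.5, target_s=0.3)     # -> 1.0
```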
  • the following will introduce the specific process of determining the position of the anthropomorphic pattern corresponding to the object on the 3D model and the projection size of the anthropomorphic pattern on the projection surface of the 3D model according to the shape characteristics of the object.
  • the anthropomorphic pattern may include virtual facial features, virtual limbs, etc.
  • the following will take the virtual facial features and virtual limbs as examples for description. It should also be understood that, in the embodiment of the present application, the number of virtual facial features may be one or more, and the number of virtual limbs may also be one or more.
  • For the virtual facial features, the positional relationship of the facial features in a face model can first be selected according to the shape features of the 3D object (that is, the real object).
  • The positional relationship of the facial features in the face model may include the ratio of the distance between the eyes and the top of the head to the length of the human head, the ratio of the distance between the mouth and the top of the head to the length of the human head, the ratio of the distance between the eyes to the width of the human head, or the ratio of the distance between other facial features and the top of the head to the length of the human head, which is not limited here. It should be understood that the positional relationship of the facial features in the face model can also be expressed in other ways, which is not limited in this application.
  • the positional relationship of the facial features in the face model can be obtained by counting the positional relationship of the facial features in the face of a large number of ordinary people, or it can be manually set based on experience, which is not limited in this application.
  • the distance between the eyes of an average person and the top of the head is about 1/2 of the length of the head, and the distance between the mouth and the top of the head is about 7/9 of the length of the head.
  • According to the shape features of the 3D object, the positional relationship of the facial features in the face model is determined. For example, when the aspect ratio of the 3D object meets range condition A1, the ratio of the distance between the eyes and the top of the head to the length of the human head is B1; when the aspect ratio of the 3D object meets range condition A2, that ratio is B2.
  • Similarly, when the aspect ratio of the 3D object meets range condition A3, the ratio of the distance between the mouth and the top of the head to the length of the human head is C1; when the aspect ratio of the 3D object meets range condition A4, that ratio is C2.
  • Alternatively, when the aspect ratio of the 3D object meets range condition A1, the ratio of the distance between the eyes and the top of the head to the head length is B1 and the ratio of the distance between the mouth and the top of the head to the head length is C1;
  • and when the aspect ratio of the 3D object meets range condition A2, the ratio of the distance between the eyes and the top of the head to the head length is B2 and the ratio of the distance between the mouth and the top of the head to the head length is C2.
  • the position of the virtual facial features on the 3D model corresponding to the 3D object can be determined according to the shape characteristics of the 3D object and the positional relationship of the facial features in the face model.
  • For example, assume the aspect ratio of a 3D object satisfies range condition A2, so that the ratio of the distance between the eyes and the top of the head to the head length is B2 and the ratio of the distance between the mouth and the top of the head to the head length is C2.
  • If the height of the 3D model is Ho, the distance between the eyes and the top of the head on the 3D model is B2 × Ho, and the distance Hmo between the mouth and the top of the head on the 3D model is C2 × Ho. In this way, the positions of the eyes and the mouth on the 3D model are obtained.
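  • As a worked example of the position calculation above (assuming, for illustration, the ordinary-person ratios of 1/2 for the eyes and 7/9 for the mouth as the selected values of B2 and C2), the distances of the eyes and mouth from the top of a model of height Ho follow directly:

```python
def facial_feature_positions(model_height, eye_ratio=1/2, mouth_ratio=7/9):
    """Distances from the top of the 3D model to the eyes and mouth, using
    head-proportion ratios (ordinary-person values as illustrative defaults)."""
    heo = eye_ratio * model_height     # eyes-to-top distance on the model
    hmo = mouth_ratio * model_height   # mouth-to-top distance on the model
    return heo, hmo

heo, hmo = facial_feature_positions(model_height=0.36)      # -> 0.18 and 0.28
```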
  • the projection size of the virtual facial features on the projection surface of the 3D model can also be determined according to the shape characteristics of the 3D object and the proportional relationship of the facial features in the face model.
  • the total width of the eyes of an ordinary person accounts for 3/5 of the total width of the face (represented by ⁇ ), and ⁇ can be determined according to application requirements (such as artistic design).
  • Assume that W1 is the width of the eyes on the facial-features image selected by the user or automatically selected by the application, W2 is the width of the area where the virtual facial features are placed on the 3D model, the virtual projection camera is placed at a distance X1 from the facial-features image, the distance between the facial-features image and the projection surface of the hexahedral bounding box of the 3D model is X2, and the size of the facial-features image projected onto the projection surface of the 3D model is S. Then the following formulas (2) and (3) are satisfied:
  • S = W1 × (X1 + X2) / X1    (2)
  • S = α × W2    (3)
  • From formulas (2) and (3), the size S of the facial-features image on the projection surface of the 3D model can be determined, and the values of X1 and X2 can be adjusted so that the projection size of the facial-features image on the projection surface of the 3D model is S.
  • For the virtual limbs, the positional relationship (proportional relationship) of the limbs in a human body model is first selected according to the appearance features of the 3D object (that is, the real object).
  • the position relationship of the limbs in the human body model may include the proportional relationship between the distance from the shoulder to the top of the head and the height, and may include the proportional relationship between the distance from the leg to the top of the head and the height.
  • the positional relationship of the limbs in the human body model can also be expressed in other ways, which is not limited in this application.
  • the positional relationship of the limbs in the human body model can also be obtained by counting the relationship between a large number of ordinary people's limbs and height, or can be set manually based on experience, which is not limited in this application. For example, the shoulder position of an average person is 1/7 of the height.
  • According to the shape features of the 3D object, the positional relationship of the limbs in the human body model is selected. For example, when the aspect ratio of a 3D object satisfies range condition A1, the ratio between the distance from the shoulders to the top of the head and the height is D1; when the aspect ratio of the 3D object satisfies range condition A2, that ratio is D2.
  • Similarly, when the aspect ratio of the 3D object meets range condition A2, the ratio between the distance from the legs to the top of the head and the height is E1; when the aspect ratio of the 3D object meets range condition A3, that ratio is E2.
  • the position of the virtual limbs on the 3D object can be determined according to the shape characteristics of the 3D object and the position relationship of the limbs in the human body model.
  • For example, assume the aspect ratio of a 3D object satisfies range condition A2, so that the ratio between the distance from the shoulders to the top of the head and the height is D2 and the ratio between the distance from the legs to the top of the head and the height is E1.
  • If the height of the 3D object is Ho, the distance from the shoulders to the top of the head on the 3D model is D2 × Ho and the distance from the legs to the top of the head is E1 × Ho. The positions of the shoulders and legs on the 3D model are shown in Figure 11.
  • the proportions of the limbs in the human body model can also be selected according to the shape characteristics of the 3D object to determine the length of the virtual limbs.
  • the proportions of the limbs in the human body model may include the proportional relationship between the length of the upper limbs and the height, and may also include the proportional relationship between the length of the lower limbs and the height.
  • the proportions of the limbs in the human body model can also be expressed in other ways, which is not limited in this application.
  • the proportional relationship of the limbs in the human body model can be obtained by counting the relationship between a large number of ordinary people's limbs and height, or it can be set manually based on experience, which is not limited in this application. For example, the length of the upper limbs of an ordinary person is 1/3 of the height, and the length of the lower limbs is 3/5 of the height.
  • the length of the virtual limbs in the human body model can be determined according to the shape characteristics of the 3D object. For example, when the aspect ratio of the 3D object meets the range condition A1, the ratio of the length to the height of the upper limb is F1, and when the aspect ratio of the 3D object meets the range condition A2, the ratio of the length to the height of the upper limb is F2. For another example, when the aspect ratio of the 3D object satisfies the range condition B1, the ratio of the length to the height of the lower limbs is G1, and when the aspect ratio of the 3D object satisfies the range condition A2, the ratio of the length to the height of the lower limbs is G2.
  • the position of the virtual limbs on the 3D model corresponding to the 3D object can be determined according to the shape characteristics of the 3D object and the proportional relationship of the virtual limbs.
  • the position of the virtual limbs on the 3D object can also be determined according to the shape characteristics of the 3D object and the position of the virtual facial features on the 3D object.
  • the position of the eyes of an ordinary person is 1/2 of the length of the human head, and the position of the mouth is 7/9 of the length of the human head.
  • the shoulder position is 1/7 of the height.
  • when the aspect ratio of the 3D object is greater than 1/2 and less than 2, the eyes are placed at 1/2 of the height of the 3D model, and the mouth is placed at 7/9 of the height of the 3D model; when the aspect ratio is less than or equal to 1/2, the eyes are placed at 1/4 of the height of the 3D model, and the mouth is placed at 1/2 of the height of the 3D model.
  • when the aspect ratio of the 3D object is greater than or equal to 2, the eyes are placed at 1/2 of the height of the 3D model, and the mouth is placed at 3/4 of the height of the 3D model.
  • the height of the virtual upper limb position is the same as or lower than the height of the eye position.
  • the length of the virtual upper limb can be 1/3 of the height of the object, and the virtual lower limb position is at the bottom of the object.
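These aspect-ratio cases, together with the upper and lower limb placement rules just stated, can be captured in a short sketch. The numeric rules follow the cases listed above; only the function name and the return structure are assumptions:

```python
def face_and_limb_layout(width: float, height: float) -> dict:
    """Place eyes/mouth (measured from the top of the 3D model) according to
    the aspect-ratio cases above, then anchor the virtual limbs."""
    aspect = width / height
    if aspect <= 0.5:
        eyes, mouth = 1 / 4, 1 / 2
    elif aspect >= 2.0:
        eyes, mouth = 1 / 2, 3 / 4
    else:                                  # 1/2 < aspect ratio < 2
        eyes, mouth = 1 / 2, 7 / 9
    return {
        "eyes_from_top": eyes * height,
        "mouth_from_top": mouth * height,
        "upper_limb_from_top": eyes * height,  # same as eye height, or slightly lower
        "upper_limb_length": height / 3,       # 1/3 of the object height
        "lower_limb_from_top": height,         # lower limbs at the bottom of the object
    }

if __name__ == "__main__":
    print(face_and_limb_layout(width=0.3, height=0.9))
```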
  • in addition to projecting anthropomorphic patterns (such as virtual facial features, virtual limbs, etc.) onto the 3D model, virtual decorations can also be added to the virtual facial features and virtual limbs.
  • a large number of virtual decoration resources may be predefined and stored in the VR device or on a server; the virtual decoration resources include multiple virtual decorations or multiple types of virtual decorations, such as hats, scarves, clothes, and other accessories.
  • the application can select 2D virtual decorations automatically, or the user can select them manually.
  • the position of the virtual decoration on the 3D model can be determined according to the position of the virtual facial features and/or the virtual limbs.
  • the virtual hat is located above the virtual eyes, and the distance between the lower edge of the virtual hat and the virtual eyes can also be determined according to the type of the virtual hat.
  • the size of the virtual ornament can be selected and adjusted according to the size of the 3D object.
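A hedged sketch of decoration placement: the hat sits above the virtual eyes with a type-dependent gap and is scaled to the object size. The gap values and the scale factor below are invented for illustration and are not specified by this application:

```python
def place_virtual_hat(eyes_from_top: float, object_width: float,
                      hat_type: str = "cap") -> dict:
    """Position and size a virtual hat relative to the virtual eyes."""
    gap_by_type = {"cap": 0.05, "top_hat": 0.10}   # assumed gaps, in model units
    gap = gap_by_type.get(hat_type, 0.05)
    return {
        "hat_lower_edge_from_top": max(eyes_from_top - gap, 0.0),
        "hat_width": 0.9 * object_width,           # assumed scale relative to the object
    }

if __name__ == "__main__":
    print(place_virtual_hat(eyes_from_top=0.45, object_width=0.3))
```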
  • the 3D model and the anthropomorphic pattern on the 3D model are decoupled.
  • the position of the anthropomorphic features and limbs on the 3D model and the size of the anthropomorphic features and limbs are determined according to the shape characteristics of the 3D object.
  • this yields an anthropomorphic virtual image that gives a better subjective experience, and it makes the presentation of the anthropomorphic pattern on the 3D model more accurate.
  • FIG. 12 shows an example of the present application, which is a schematic diagram of the effect after the virtual facial features are projected.
  • the application can automatically select or manually select the 2D facial features animation as the facial features and expressions of the anthropomorphic animation.
  • the application can load the available facial expressions to display to the user, and the user can select one of them to act on the 3D model.
  • the user only needs to select a single GIF animation of the facial features, or two or more facial-feature patterns of different shapes that are projected alternately over time; a minimal sketch of such time-based alternation follows.
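A minimal time-based selector for alternating two or more facial-feature patterns could look like the following; the period value is an arbitrary choice:

```python
import time

def current_facial_pattern(patterns: list, period_s: float = 0.5):
    """Cycle through the selected facial-feature patterns at a fixed period,
    returning the one to project at the current moment."""
    index = int(time.time() / period_s) % len(patterns)
    return patterns[index]

if __name__ == "__main__":
    print(current_facial_pattern(["smile.png", "blink.png"]))
```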
  • FIG. 14 shows another example of the present application, which is a schematic diagram of the effect after the virtual facial features are projected.
  • the application can automatically select or manually select the 2D facial features animation as the facial features and expressions of the anthropomorphic animation.
  • the application can load the available facial expressions and present them on the device for display to the user, and the user can select one of them to act on the 3D model.
  • since the virtual limb lies outside the 3D model of the real object, and a projection is a flat effect that does not look good from other viewing angles, the virtual limb generally adopts a three-dimensional model instead of a two-dimensional pattern.
  • the texture of the virtual limb model can be colored according to the main color of the real object or a color selected by the user. The position and size of the virtual facial-feature pattern on the 3D model are then determined according to the appearance characteristics of the real object, and the virtual facial-feature image is projected onto the 3D model, generating the effect of virtual facial features displayed on the 3D model as shown in Figure 14.
  • after the position of the virtual limb model on the 3D model is determined, a connection area between the virtual limb model and the 3D model needs to be determined.
  • to reduce intrusion into the 3D model caused by limb motion, the connecting end between the limb model and the 3D model can use a spherical or ellipsoidal connection. Specifically, suppose the coordinates of the connection point P a1 on the left side between the 3D model and the limb model are (X a1, Y a1, Z a1), and the coordinates of the connection point P a2 on the right side are (X a2, Y a2, Z a2). When the connecting end is hemispherical with sphere radius r, the centers of the left and right hemispheres, P h1 = (X h1, Y h1, Z h1) and P h2 = (X h2, Y h2, Z h2), satisfy Y a1 = Y a2 = Y h1 = Y h2, Z a1 = Z a2 = Z h1 = Z h2, X h1 ≤ X a1 - r, and X h2 ≥ X a2 + r.
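Under the hemispherical-connection constraints just stated, the sphere centers can simply be placed at the boundary values; the sketch below only restates those constraints in code, and the example coordinates are made up:

```python
def hemisphere_centers(p_a1, p_a2, r):
    """Given the left/right connection points P_a1 and P_a2 on the 3D model and
    the sphere radius r, return centers P_h1/P_h2 satisfying
    Y_h = Y_a, Z_h = Z_a, X_h1 <= X_a1 - r and X_h2 >= X_a2 + r
    (the boundary values are chosen here)."""
    x1, y1, z1 = p_a1
    x2, y2, z2 = p_a2
    p_h1 = (x1 - r, y1, z1)
    p_h2 = (x2 + r, y2, z2)
    return p_h1, p_h2

if __name__ == "__main__":
    print(hemisphere_centers((-0.2, 0.0, 0.5), (0.2, 0.0, 0.5), 0.05))
```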
  • FIG. 16 shows another example of the present application, which is a schematic diagram of the effect after the virtual facial features are projected.
  • Figure 16 shows a schematic diagram of the effect of adding ornamental models in addition to the virtual facial features and limbs.
  • the application can automatically select, or the user can manually select, the 2D facial-feature animation and the desired accessory model.
  • the application can load the available accessory models and present them on the device for display to the user, and the user can select one of them to act on the 3D model.
  • the position of the virtual hat selected by the user is placed in the center of the top surface (TOP surface) of the hexahedral bounding box corresponding to the real object.
  • then the virtual facial-feature image and the virtual hat are projected onto the 3D model according to the appearance characteristics of the real object, so that the 3D model presents the effect of virtual facial features and accessories as shown in Figure 16.
  • "pre-set" and "pre-defined" can be implemented by pre-saving, in devices (for example, including terminals and network devices), corresponding codes, tables, or other means that can be used to indicate the related information; this application does not limit the specific implementation.
  • FIG. 18 shows a schematic block diagram of an apparatus 300 for generating an anthropomorphic 3D model according to an embodiment of the present application.
  • the apparatus 300 may correspond to the terminal device, VR device, AR device, near-eye display device, or user equipment with the function of displaying 3D animation described in the above method 200. It can also be a chip or component applied to terminal equipment, VR equipment, AR equipment, near-eye display equipment, or user equipment with the function of displaying 3D animation, and each module or unit in the apparatus 300 is used to execute each action or processing procedure performed in the above method 200.
  • the device 300 may include a processing unit 310 and a display unit 320.
  • the processing unit 310 is configured to obtain a 3D model of the target object.
  • the processing unit 310 is also used to obtain a personification pattern of the target object.
  • the processing unit 310 is further configured to determine the position and projection size of the anthropomorphic pattern on the 3D model according to the shape feature of the target object;
  • the processing unit 310 is also configured to render the anthropomorphic pattern on the 3D model according to the position and projection size of the anthropomorphic pattern on the 3D model, to generate an anthropomorphic 3D model.
  • the display unit 320 is used to display the anthropomorphic 3D model to the user.
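The four responsibilities of the processing unit 310 map naturally onto a simple pipeline. The sketch below is a schematic stand-in only; every helper function is a hypothetical stub invented for illustration, not the apparatus's actual implementation:

```python
def get_3d_model(target_object):          # stand-in for local call, external call, or generation
    return {"height": target_object["height"], "width": target_object["width"]}

def get_pattern(target_object):           # stand-in for manual or automatic pattern selection
    return {"name": "facial_features.gif", "size": 0.2}

def layout(target_object, pattern):       # position and projection size from shape features
    return {"center_from_top": 0.5 * target_object["height"],
            "projection_size": 0.6 * target_object["width"]}

def render(model, pattern, placement):    # stand-in for projection-based rendering
    return {"model": model, "pattern": pattern, "placement": placement}

def generate_anthropomorphic_model(target_object):
    model = get_3d_model(target_object)          # S210: obtain the 3D model
    pattern = get_pattern(target_object)         # S220: obtain the anthropomorphic pattern
    placement = layout(target_object, pattern)   # S230: position and projection size
    return render(model, pattern, placement)     # S240: render onto the 3D model

if __name__ == "__main__":
    print(generate_anthropomorphic_model({"height": 1.0, "width": 0.4}))
```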
  • with the device for generating an anthropomorphic 3D model provided by this application, the position of the anthropomorphic pattern on the 3D model and the projection size of the anthropomorphic facial features and limbs on the 3D model are determined according to the appearance characteristics of the real object, which makes the presentation of the anthropomorphic pattern on the 3D model more accurate and vivid; at the same time, the calculation method for the position and size of the facial features and limbs of the anthropomorphic animation yields an anthropomorphic virtual image that is better in subjective experience.
  • in addition, the anthropomorphic pattern is rendered onto the 3D model corresponding to the object by projection, which avoids the poor rendering effect caused by dangling or embedded anthropomorphic patterns, quickly realizes the anthropomorphic AR effect of the object, and reduces the complexity of designing facial expressions and body motions.
  • the processing unit 310 is specifically configured to: when rendering the anthropomorphic pattern, determine, according to the size of the anthropomorphic pattern, the distance between the anthropomorphic pattern and the 3D model and the distance between the anthropomorphic pattern and the virtual projection point, so that the size of the projection of the anthropomorphic pattern on the projection surface of the 3D model is the same as the projection size determined according to the shape features of the target object.
  • the projection size S of the anthropomorphic pattern on the projection surface of the 3D model satisfies the following condition: S = W 1 × (X 1 + X 2) / X 1, where the size of the anthropomorphic pattern is W 1, the distance between the anthropomorphic pattern and the projection surface of the 3D model is X 2, the projection surface of the 3D model is the face of the hexahedral bounding box of the 3D model that is parallel to the plane in which the anthropomorphic pattern lies, and the distance between the anthropomorphic pattern and the virtual projection point is X 1.
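A small numeric check of this relation; `distance_for_target_size` solves the same equation for X 2, which is how the distances can be adjusted so the projection matches the size determined from the object's shape. The example numbers are arbitrary:

```python
def projected_size(w1: float, x1: float, x2: float) -> float:
    """S = W1 * (X1 + X2) / X1 for a pattern of size W1 placed X1 from the
    virtual projection point and X2 from the model's projection surface."""
    return w1 * (x1 + x2) / x1

def distance_for_target_size(w1: float, x1: float, target_s: float) -> float:
    """Solve the same relation for X2 given a target projection size S."""
    return x1 * (target_s / w1 - 1.0)

if __name__ == "__main__":
    s = projected_size(w1=0.2, x1=1.0, x2=2.0)        # -> 0.6
    print(s, distance_for_target_size(0.2, 1.0, s))   # -> 0.6 2.0
```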
  • the anthropomorphic pattern may be virtual facial features or virtual limbs.
  • the personification pattern includes:
  • virtual facial features, and/or, virtual limbs.
  • the number of the virtual facial features may be one or more, and the number of the virtual limbs may also be one or more.
  • the processing unit 310 is specifically configured to:
  • according to the shape features of the target object, the proportional relationship of the virtual facial features and/or the proportional relationship of the virtual limbs is determined; then, according to the shape features of the target object and these proportional relationships, the position and projection size of the virtual facial features and/or the virtual limbs on the 3D model are determined.
  • the proportional relationship of the virtual facial features includes at least one of: the proportional relationship between the distance from the eyes to the top of the head and the length of the human head, the proportional relationship between the distance from the mouth to the top of the head and the length of the human head, and the proportional relationship between the distance between the two eyes and the width of the human head.
  • the proportional relationship of the virtual limbs includes at least one of: the proportional relationship between the distance from the shoulder to the top of the head and the height, the proportional relationship between the distance from the leg to the top of the head and the height, the proportional relationship between the length of the upper limbs and the height, and the proportional relationship between the length of the lower limbs and the height.
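One convenient way to carry these "at least one of" relationships through the pipeline is a pair of small data holders. The field names are invented; the default values echo the ordinary-person figures quoted elsewhere in this document, except where marked as assumed:

```python
from dataclasses import dataclass

@dataclass
class FacialProportions:
    eyes_to_top_over_head_length: float = 1 / 2    # eyes at 1/2 of the head length
    mouth_to_top_over_head_length: float = 7 / 9   # mouth at 7/9 of the head length
    eye_width_over_face_width: float = 3 / 5       # total eye width over face width

@dataclass
class LimbProportions:
    shoulder_to_top_over_height: float = 1 / 7     # shoulder at 1/7 of the height
    leg_to_top_over_height: float = 5 / 6          # assumed; not specified in the text
    upper_limb_over_height: float = 1 / 3
    lower_limb_over_height: float = 3 / 5
```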
  • the processing unit 310 is further configured to determine the position of the virtual decoration on the 3D model according to the position of the virtual facial features and/or the virtual limbs on the 3D model.
  • the virtual decoration is manually selected or automatically selected from the virtual decoration resource, and the virtual decoration resource includes a plurality of virtual decorations.
  • the virtual decoration resources may include decorations for the virtual facial features and the virtual limbs, etc.
  • the decorations may be hats, scarves, shoes, clothes, or other decorations, for example.
  • the virtual hat is located above the virtual eyes
  • the virtual scarf is located below the virtual head, and so on.
  • the processing unit 310 is specifically configured to: according to the target object, manually or automatically select the anthropomorphic pattern from pre-stored anthropomorphic pattern resources, where the anthropomorphic pattern resources include multiple anthropomorphic patterns.
  • the processing unit 310 is specifically configured to: call the 3D model locally, call the 3D model externally, or generate the 3D model.
  • the format of the personification pattern includes at least one of a personification picture or a personification image exchange format GIF animation.
  • the processing unit is further configured to recognize and/or locate the object using a camera device.
  • a camera can be used to take pictures and scan the object to realize the recognition and/or positioning of the object.
  • the device 300 may also include a storage unit, which is used to store anthropomorphic pattern resources, virtual decoration resources, and the like.
  • the storage unit is also used to store instructions executed by the processing unit 310 and the display unit 320.
  • the processing unit (module) 310, the display unit (module) 320, and the storage unit are coupled with each other.
  • the storage unit (module) stores instructions.
  • the processing unit 310 is used to execute the instructions stored in the storage unit.
  • the display unit 320 is used to perform the display function under the drive of the processing unit 310.
  • the anthropomorphic pattern resources and the virtual decoration resources can also be stored in a server in the cloud, and the device 300 can obtain the anthropomorphic pattern resources and the virtual decoration resources from the server.
  • the device 300 may further include an object recognition and location service module.
  • the object recognition and location service module may be a camera module or the like.
  • the object recognition and positioning service module is used to identify and locate real objects, and output the 6DoF posture of the real objects.
  • the device 300 may not include an object recognition and location service module.
  • Object recognition and location service modules can also be deployed on cloud services.
  • the communication device 300 shown in FIG. 18 may be a terminal device (for example, a mobile phone), a VR device (for example, VR glasses), an AR device, a near-eye display device, a user device with a 3D animation display function, and the like.
  • a terminal device, a VR device, an AR device, a near-eye display device, and a user device capable of displaying 3D animation include the communication device 300 shown in FIG. 18.
  • the device 300 may also include a camera device, for example, a camera or the like.
  • the processing unit 310 may be implemented by a processor
  • the storage unit may be implemented by a memory
  • the display unit 320 may be implemented by a display.
  • as shown in FIG. 19, the device 400 for generating an anthropomorphic 3D model may include a processor 410, a memory 420, and a display 430.
  • the device 400 may also include a camera device and a display device, for example, it may be a camera and a display.
  • the device 400 may further include an object recognition and location service module.
  • the object recognition and location service module may be a camera module or the like.
  • each unit in the device can be all implemented in the form of software called by processing elements; they can also be all implemented in the form of hardware; part of the units can also be implemented in the form of software called by the processing elements, and some of the units can be implemented in the form of hardware.
  • each unit can be a separate processing element, or it can be integrated in a certain chip of the device for implementation.
  • a unit can also be stored in the memory in the form of a program, which is called by a certain processing element of the device to execute the unit's function.
  • the processing element may also be called a processor, and may be an integrated circuit with signal processing capability.
  • each step of the above method or each of the above units may be implemented by an integrated logic circuit of hardware in a processor element or implemented in a form of being called by software through a processing element.
  • the unit or module in any of the above devices may be one or more integrated circuits configured to implement the above methods, for example: one or more application specific integrated circuits (ASICs), or, One or more digital signal processors (DSP), or one or more field programmable gate arrays (FPGA), or a combination of at least two of these integrated circuits.
  • when a unit in the device is implemented in the form of a processing element scheduling a program, the processing element can be a general-purpose processor, such as a central processing unit (CPU) or another processor that can call programs.
  • these units can be integrated together and implemented in the form of a system-on-a-chip (SOC).
  • FIG. 20 is a schematic block diagram of the structure of a terminal device provided by this application.
  • the terminal device as a mobile phone as an example
  • FIG. 20 shows a block diagram of a part of the structure of a mobile phone 400 related to an embodiment of the present application.
  • the mobile phone 400 includes: a radio frequency (RF) circuit 410, a power supply 420, a processor 430, a memory 440, an input unit 450, a display unit 460, a camera 170, an audio circuit 480, a wireless fidelity (WiFi) module 490, and other components.
  • the structure of the mobile phone shown in FIG. 20 does not constitute a limitation on the mobile phone, and may include more or fewer components than those shown in the figure, or combine some components, or arrange different components.
  • the components of the mobile phone 400 are specifically introduced below in conjunction with FIG. 20:
  • the RF circuit 410 can be used for receiving and sending signals in the process of sending and receiving information or during a call; in particular, after downlink information from the base station is received, it is passed to the processor 430 for processing, and uplink data is sent to the base station.
  • the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
  • the RF circuit 410 can also communicate with the network and other devices through wireless communication. The wireless communication can use any communication standard or protocol.
  • the mobile phone 400 can communicate with a server in the cloud through the RF circuit 410 to obtain anthropomorphic pattern resources and virtual decoration resources.
  • the memory 440 may be used to store software programs and modules.
  • the processor 430 executes various functional applications and data processing of the mobile phone 400 by running the software programs and modules stored in the memory 440.
  • the memory 440 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system, application programs required by at least one function (such as a sound playback function, an image playback function, etc.), and the like; the data storage area may store data created according to the use of the mobile phone 400 (such as audio data, a phone book, etc.), and the like.
  • the memory 440 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • the memory 440 may store anthropomorphic pattern resources, virtual decoration resources, and the like.
  • the input unit 450 may be used to receive inputted digital or character information, and generate key signal input related to user settings and function control of the mobile phone 400.
  • the input unit 450 may include a touch panel 451 and other input devices 452.
  • the touch panel 451, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 451 with a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program.
  • the touch panel 451 may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 430, and it can also receive and execute commands sent by the processor 430.
  • the touch panel 451 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
  • in addition to the touch panel 451, the input unit 450 may also include other input devices 452.
  • other input devices 452 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, joystick, and the like.
  • the user can select the facial-feature pattern to be used from the available facial-feature patterns through the input unit 450.
  • the display unit 460 may be used to display information input by the user or information provided to the user and various menus of the mobile phone 400.
  • the display unit 460 may include a display panel 461.
  • the display panel 461 may be configured in the form of LCD, OLED, or the like.
  • the touch panel 451 can cover the display panel 461; when the touch panel 451 detects a touch operation on or near it, it transmits the operation to the processor 430 to determine the type of the touch event, and the processor 430 then provides corresponding visual output on the display panel 461 according to the type of the touch event.
  • although the touch panel 451 and the display panel 461 are used as two independent components to implement the input and output functions of the mobile phone 400, in some embodiments the touch panel 451 and the display panel 461 may be integrated to realize the input and output functions of the mobile phone 400.
  • the display unit 460 may display the anthropomorphic pattern finally generated on the 3D model to the user.
  • the mobile phone 400 may also include a camera 170, which is used to obtain images or video resources.
  • the camera 170 can complete the recognition and positioning of real objects.
  • the audio circuit 480, the speaker 481, and the microphone 482 can provide an audio interface between the user and the mobile phone 400.
  • the audio circuit 480 can transmit the electrical signal converted from the received audio data to the speaker 481, and the speaker 481 converts it into a sound signal for output; on the other hand, the microphone 482 converts the collected sound signal into an electrical signal, which the audio circuit 480 receives and converts into audio data; the audio data is then output to the RF circuit 410 to be sent to, for example, another mobile phone, or output to the memory 440 for further processing.
  • the user can realize the interaction with the 3D anthropomorphic animation through the audio circuit.
  • the virtual facial features can realize lip synchronization and expression drive based on voice, and the virtual limbs can also be driven by voice.
  • WiFi is a short-distance wireless transmission technology.
  • through the WiFi module 490, the mobile phone 400 can help users send and receive emails, browse webpages, and access streaming media; it provides users with wireless broadband Internet access.
  • although FIG. 20 shows the WiFi module 490, it is understandable that it is not a necessary component of the mobile phone 400 and can be omitted as needed without changing the essence of the invention.
  • the processor 430 is the control center of the mobile phone 400, which uses various interfaces and lines to connect various parts of the entire mobile phone, and by running or executing software programs and/or modules stored in the memory 440, and calling data stored in the memory 440, Perform various functions of the mobile phone 400 and process data, thereby realizing multiple services based on the mobile phone.
  • the processor 430 may include one or more processing units; preferably, the processor 430 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 430.
  • the processor 430 can complete the calculation of the size and position of the anthropomorphic pattern on the projection surface of the 3D model.
  • the mobile phone 400 also includes a power source 420 (such as a battery) for supplying power to various components.
  • the power source can be logically connected to the processor 430 through a power management system, so that functions such as charging, discharging, and power consumption can be managed through the power management system.
  • the mobile phone 400 may also include a sensor, a Bluetooth module, etc., which will not be repeated here.
  • FIG. 20 is only a possible structure of the mobile phone provided in this application, and the mobile phone that can execute the method for generating the personified 3D model provided in this application may also have other structures, and the embodiments of the application are not limited herein.
  • the processor mentioned above may be a central processing unit (CPU), and the processor may also be another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory in the embodiments of the present application may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be read-only memory (ROM), programmable read-only memory (programmable ROM, PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically erasable programmable read-only memory (electrically EPROM, EEPROM), or flash memory.
  • the volatile memory may be random access memory (RAM), which is used as an external cache. By way of example rather than limitation, many forms of RAM are available, such as static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (synchlink DRAM, SLDRAM), and direct rambus random access memory (direct rambus RAM, DR RAM).
  • An embodiment of the present application also provides a communication system, which includes: the above-mentioned device for generating an anthropomorphic 3D model and the above-mentioned server.
  • the embodiment of the present application also provides a computer-readable medium for storing computer program code, and the computer program includes instructions for executing the method 200 for generating a personified 3D model in the embodiment of the present application.
  • the readable medium may be ROM or RAM, which is not limited in the embodiment of the present application.
  • an embodiment of the present application also provides a computer program product; the computer program product includes instructions, and when the instructions are executed, the device for generating the anthropomorphic 3D model executes the operations corresponding to the above method.
  • An embodiment of the present application also provides a system chip, which includes a processing unit and a communication unit.
  • the processing unit may be, for example, a processor, and the communication unit may be, for example, an input/output interface, a pin, or a circuit.
  • the processing unit can execute computer instructions so that the chip in the communication device executes any of the above-mentioned methods for generating anthropomorphic 3D models provided in the embodiments of the present application.
  • any communication device provided in the foregoing embodiments of the present application may include the system chip.
  • the computer instructions are stored in a storage unit.
  • the storage unit is a storage unit in the chip, such as a register, a cache, etc.
  • the storage unit can also be a storage unit located outside the chip in the terminal, such as a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM), or the like.
  • the processor mentioned in any one of the above may be a CPU, a microprocessor, an ASIC, or one or more integrated circuits used to control the program execution of the method for generating the anthropomorphic 3D model described above.
  • the processing unit and the storage unit can be decoupled, respectively set on different physical devices, and connected in a wired or wireless manner to realize the respective functions of the processing unit and the storage unit, so as to support the system chip to implement the above-mentioned embodiments Various functions in.
  • the processing unit and the memory may also be coupled to the same device.
  • the terms "system" and "network" are often used interchangeably in this document.
  • the term "and/or" in this document merely describes an association relationship between associated objects, indicating that three relationships can exist; for example, A and/or B can mean: A exists alone, both A and B exist, or B exists alone.
  • the character "/" in this text generally indicates that the associated objects before and after are in an "or” relationship.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative; for example, the division into units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if the functions are implemented in the form of a software functional unit and sold or used as an independent product, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application essentially or the part that contributes to the existing technology or the part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including Several instructions are used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: U disk, mobile hard disk, ROM, RAM, magnetic disk or optical disk and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method and apparatus for generating an anthropomorphic 3D model. The method includes: obtaining a 3D model of a target object (S210); obtaining an anthropomorphic pattern of the target object (S220); determining the position and projection size of the anthropomorphic pattern on the 3D model according to the shape features of the target object (S230); and rendering the anthropomorphic pattern onto the 3D model according to the position and projection size of the anthropomorphic pattern on the 3D model, to generate an anthropomorphic 3D model (S240). The method makes the presentation of the anthropomorphic pattern on the 3D model more accurate and vivid. By rendering the anthropomorphic pattern onto the 3D model corresponding to the object by projection, the problem of a poor generation effect caused by dangling or embedded anthropomorphic patterns is avoided, and the design complexity of facial expressions and limb movements is reduced.

Description

拟人化3D模型生成的方法和装置
本申请要求于2020年03月20日提交中国专利局、申请号为202010201611.8、申请名称为“拟人化3D模型生成的方法和装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及虚拟现实领域,更为具体的,涉及一种拟人化3D模型生成的方法和装置。
背景技术
现有三维(three dimensions,3D)动画电影有一种物体拟人化形态,赋予物体生命,能够使得冷冰冰的物体能够表现得像人一样行动、思考及发生表情变化,给观众带来从物体视角的完全不同的感受。
目前,在现实物体上制作拟人化动画一般采用3D模型制作的方式,通过美工人员提前对现实物体进行3D建模,然后在3D模型上制作对应的拟人化虚拟形象(例如虚拟五官、虚拟肢体模型),然后再做纹理贴图,形成动画。但是针对五官表情以及肢体的动作设计复杂,即便是简单的动画也需要较长时间专业的调整,拟人化虚拟形象在3D模型上呈现的效果不好,用户主观上对3D模型上拟人化虚拟形象的体验不好,效率和质量都比较低,并且,针对不同的物体,对应的3D模型需要重新创造,进一步的,还需要对新的3D模型重新制作五官、肢体动作进行调整,比较复杂,适应性较差。
发明内容
本申请提供一种拟人化3D模型生成的方法和装置,可以快速实现物体拟人化的增强现实效果,降低五官表情以及肢体的动作设计复杂度。并且可以获得主观体验上较好的拟人化虚拟形象,增强3D模型上的拟人化形象的效果。
第一方面,提供了一种拟人化3D模型生成的方法,该方法的执行主体可以为可以显示虚拟图像或者动画的终端设备,例如,手机、AR设备、个人数字处理设备、VR设备等,也可以是应用于该终端设备上的芯片。该方法包括:获取目标物体的3D模型;获取该目标物体的拟人化图案;根据该目标物体的外形特征确定该拟人化图案在该3D模型上的位置和投影大小;根据该拟人化图案在该3D模型上的位置和投影大小,将该拟人化图案渲染到该3D模型上,生成拟人化3D模型。其中,物体的外形特征可以理解为物体图像中像素的数量,根据该物体图像中像素的数量,可以确定物体的长、宽、高,或者,可以确定长、宽、高中任意两个参数或者三个参数的比例。或者,物体的外形特征也可以理解为物体实际的长、宽、高中任意两个参数或者三个参数的比例。
第一方面提供的拟人化3D模型生成的方法,拟人化图案在3D模型上的位置以及拟人化的五官以及四肢等在3D模型上的投影大小是根据现实物体的外形特征确定的,可以 使得拟人化图案在3D模型上的呈现更加准确生动。并且,通过投影的方式将拟人化图案渲染到物体对应的3D模型上,避免了出现悬空或者嵌入拟人化图案导致拟人化图案在3D模型上渲染时效果较差的问题,快速实现物体拟人化的AR效果,降低五官表情以及肢体的动作设计复杂度。
根据第一方面,在第一方面的第一种可能的实现方式中,根据该拟人化图案在该3D模型上的位置和该拟人化图案在该3D模型的投影面上的投影大小,将该拟人化图案渲染到该3D模型上,具体包括:在渲染该拟人化图案时,根据该拟人化图案的大小,确定该拟人化图案与该3D模型之间的距离、以及该拟人化图案与虚拟投影点之间的距离,以使得该拟人化图案在该3D模型的投影面上的投影面大小和根据该目标物体的外形特征确定的投影大小相同。在该实现方式中,通过确定(调整)拟人化图案与3D模型之间的距离、以及拟人化图案与虚拟投影点之间的距离,实现调整3D模型上的拟人化图案的大小,可以快速准确的将拟人化图像按照确定的大小投影到3D模型上,提高渲染效率,并且,使得拟人化图案在3D模型上的呈现更加准确生动。示例性的,该拟人化图案可以为虚拟五官或者虚拟四肢。
根据第一方面和第一方面的第一种可能的实现方式,在第一方面的第二种可能的实现方式中,该拟人化图案在该3D模型的投影面上的投影大小S满足如下条件:
Figure PCTCN2021070703-appb-000001
其中,该拟人化图案的大小为W 1,该拟人化图案与该3D模型的投影面之间的距离为X 2,该3D模型的投影面为该3D模型六面体包围盒与该拟人化图案所在的面平行的面,该拟人化图案与该虚拟投影点之间的距离为X 1。在该实现方式中,通过上述的公式调整拟人化图案在3D模型上的投影大小,可以提高调整拟人化图案在3D模型上的投影大小的准确性以及效率,便于实现。
根据第一方面、以及第一方面的第一种至第二种可能的实现方式,在第一方面的第三种可能的实现方式中,该拟人化图案包括:
虚拟五官、和/或,虚拟四肢。
根据第一方面的第三种可能的实现方式,在第一方面的第四种可能的实现方式中,该虚拟五官的个数可以为一个或者多个、该虚拟四肢的个数也可以为一个或者多个。
根据第一方面、以及第一方面的第一种至第四种可能的实现方式,在第一方面的第五种可能的实现方式中,该拟人化图案包括虚拟五官和/或虚拟四肢,根据该目标物体的外形特征确定该拟人化图案在该3D模型上的位置和投影大小,具体包括:
根据该目标物体的外形特征,确定该虚拟五官的比例关系和/或该虚拟四肢的比例关系;
根据该目标物体的外形特征,以及该虚拟五官的比例关系和/或该虚拟四肢的比例关系,确定该虚拟五官和/或该虚拟四肢在该3D模型上的位置和投影大小;
其中,该虚拟五官的比例关系包括:眼睛与头顶之间的距离与人头长度之间的比例关系、嘴巴与头顶之间的距离与人头长度之间的比例关系、双眼之间的距离与人头宽度之间的比例关系中的至少一种,
该虚拟四肢的比例关系包括:肩部到头顶的距离与身高之间的比例关系、腿部到头顶 的距离与身高之间的比例关系、上肢的长度与身高之间的比例关系、下肢的长度与身高之间的比例关系中的至少一种。
在该实现方式中,通过目标物体的外形特征,确定与之匹配的现实中普通人的虚拟五官的比例关系和/或该虚拟四肢的比例关系,然后确定虚拟五官以及虚拟四肢在该3D模型上的位置,可以获得主观体验上较好的拟人化虚拟形象,并且可以使得拟人化图案在3D模型上的呈现更加准确。
根据第一方面、以及第一方面的第一种至第五种可能的实现方式,在第一方面的第六种可能的实现方式中,该方法还包括:
根据该虚拟五官和/或该虚拟四肢在该3D模型上的位置,确定该3D模型上的虚拟装饰物在该3D模型上的位置。可选的,该虚拟装饰物是在虚拟装饰物资源中人工选取或者自动选取的,该虚拟装饰物资源包括多个虚拟装饰物。示例性的,虚拟装饰物资源可以包括虚拟五官、虚拟四肢上的装饰物等。虚拟五官、虚拟四肢上的装饰物例如可以是帽子、围巾、鞋子、衣服或者其他装饰品等。例如,虚拟帽子位于虚拟眼睛上方,虚拟围巾位于虚拟头部的下方等。
根据第一方面、以及第一方面的第六种可能的实现方式,在第一方面的第七种可能的实现方式中,虚拟装饰物资源是预先存储的。
根据第一方面、以及第一方面的第一种至第七种可能的实现方式,在第一方面的第八种可能的实现方式中,获取该目标物体的3D模型,具体包括:本地调用该3D模型、外部调用该3D模型、或者生成该3D模型中的任意一种。
根据第一方面、以及第一方面的第一种至第八种可能的实现方式,在第一方面的第九种可能的实现方式中,获取该目标物体的拟人化图案,具体包括:根据该目标物体,从预存的拟人化图案资源中人工选取或者自动选取该拟人化图案,该拟人化图案资源包括多个拟人化图案。在该实现方式中,3D模型以及该3D模型上的拟人化图案是解耦的,针对不同的3D模型,不需要重新创建虚拟五官以及虚拟四肢等。可以降低设计成本和复杂度。
根据第一方面、以及第一方面的第一种至第九种可能的实现方式,在第一方面的第十种可能的实现方式中,该拟人化图案的格式包括拟人化图片或者拟人化图像交换格式GIF动画中的至少一种。
根据第一方面、以及第一方面的第一种至第十种可能的实现方式,在第一方面的第十一种可能的实现方式中,该方法还包括:利用摄像装置识别和/或定位该目标物体。例如,利用摄像头拍照、扫描该物体,实现识别和/或定位该目标物体。
第二方面,提供了一种拟人化3D模型生成的装置,该装置包括:
处理单元,用于获取目标物体的3D模型;
该处理单元,还用于获取该目标物体的拟人化图案;
该处理单元,还用于根据该目标物体的外形特征确定该拟人化图案在该3D模型上的位置和投影大小;其中,物体的外形特征可以理解为物体图像中像素的数量,根据该物体图像中像素的数量,可以确定物体的长、宽、高,或者,可以确定长、宽、高中任意两个参数或者三个参数的比例。或者,物体的外形特征也可以理解为物体实际的长、宽、高中任意两个参数或者三个参数的比例。
该处理单元,还用于根据该拟人化图案在该3D模型上的位置和投影大小,将该拟人 化图案渲染到该3D模型上,生成拟人化3D模型。
第二方面提供的拟人化3D模型生成的装置,拟人化图案在3D模型上的位置以及拟人化的五官以及四肢等在3D模型上的投影大小是根据现实物体的外形特征确定的,可以使得拟人化图案在3D模型上的呈现更加准确生动。并且,通过投影的方式将拟人化图案渲染到物体对应的3D模型上,避免了出现悬空或者嵌入拟人化图案导致拟人化图案在3D模型上渲染的效果较差的问题,快速实现物体拟人化的AR效果,降低五官表情以及肢体的动作设计复杂度。
根据第二方面,在第二方面的第一种可能的实现方式中,该处理单元具体用于:在渲染该拟人化图案时,根据该拟人化图案的大小,确定该拟人化图案与该3D模型之间的距离、以及该拟人化图案与虚拟投影点之间的距离,以使得该拟人化图案在该3D模型的投影面上的投影面大小和根据该目标物体的外形特征确定的投影大小相同。示例性的,该拟人化图案可以为虚拟五官或者虚拟四肢。在该实现方式中,通过调整拟人化图案与3D模型之间的距离、以及拟人化图案与虚拟投影点之间的距离,实现调整3D模型上的拟人化图案的大小,可以快速准确的将拟人化图像按照确定的大小投影到3D模型上,提高渲染效率,并且,使得拟人化图案在3D模型上的呈现更加准确生动。
根据第二方面和第二方面的第一种可能的实现方式,在第二方面的第二种可能的实现方式中,该拟人化图案在该3D模型上的投影面上的投影大小S满足如下条件:
Figure PCTCN2021070703-appb-000002
其中,该拟人化图案的大小为W 1,该拟人化图案与该3D模型的投影面之间的距离为X 2,该3D模型的投影面为该3D模型六面体包围盒与该拟人化图案所在的面平行的面,该拟人化图案与该虚拟投影点之间的距离为X 1。在该实现方式中,可以提高调整拟人化图案在3D模型上的投影大小的准确性以及效率,便于实现。
根据第二方面、以及第二方面的第一种至第二种可能的实现方式,在第二方面的第三种可能的实现方式中,该拟人化图案包括:虚拟五官,和/或,虚拟四肢。
根据第二方面的第三种可能的实现方式,在第二方面的第四种可能的实现方式中,该虚拟五官的个数可以为一个或者多个、该虚拟四肢的个数也可以为一个或者多个。
根据第二方面、以及第二方面的第一种至第四种可能的实现方式,在第二方面的第五种可能的实现方式中,该处理单元具体用于:
根据该目标物体的外形特征,确定该虚拟五官和/或该虚拟四肢的比例关系;
根据该目标物体的外形特征,以及该虚拟五官的比例关系和/或该虚拟四肢的比例关系,确定该虚拟五官的比例关系和/或该虚拟四肢在该3D模型上的位置和投影大小;
其中,该虚拟五官的比例关系包括:眼睛与头顶之间的距离与人头长度之间的比例关系、嘴巴与头顶之间的距离与人头长度之间的比例关系、双眼之间的距离与人头宽度之间的比例关系中的至少一种,
该虚拟四肢的比例关系包括:肩部到头顶的距离与身高之间的比例关系、腿部到头顶的距离与身高之间的比例关系、上肢的长度与身高之间的比例关系、下肢的长度与身高之间的比例关系中的至少一种。
在该实现方式中,根据目标物体的外形特征,确定与之匹配的现实中普通人的虚拟五 官的比例关系和/或该虚拟四肢的比例关系,然后确定虚拟五官以及虚拟四肢在该3D模型上的位置,可以获得主观体验上较好的拟人化虚拟形象,并且可以使得拟人化图案在3D模型上的呈现更加准确。
根据第二方面、以及第二方面的第一种至第五种可能的实现方式,在第二方面的第六种可能的实现方式中,该处理单元,还用于根据该虚拟五官和/或该虚拟四肢在该3D模型上的位置,确定该3D模型上的虚拟装饰物在该3D模型上的位置。可选的,该虚拟装饰物是在虚拟装饰物资源中人工选取或者自动选取的,该虚拟装饰物资源包括多个虚拟装饰物。示例性的,虚拟装饰物资源可以包括虚拟五官、虚拟四肢上的装饰物等,装饰物例如可以是帽子、围巾、鞋子、衣服或者其他装饰品等。例如,虚拟帽子位于虚拟眼睛上方,虚拟围巾位于虚拟头部的下方等。
根据第二方面、以及第二方面的第一种至第六种可能的实现方式,在第二方面的第七种可能的实现方式中,该虚拟装饰物资源是预先存储的。
根据第二方面、以及第二方面的第一种至第七种可能的实现方式,在第二方面的第八种可能的实现方式中,该处理单元具体用于:本地调用该3D模型、外部调用该3D模型、或者生成该3D模型。
根据第二方面、以及第二方面的第一种至第八种可能的实现方式,在第二方面的第九种可能的实现方式中,该处理单元具体用于:根据该目标物体,从预存的拟人化图案资源中人工选取或者自动选取该拟人化图案,该拟人化图案资源包括多个拟人化图案。在该实现方式中,3D模型以及该3D模型上的拟人化图案是解耦的,针对不同的3D模型,不需要重新创建虚拟的五官以及四肢等。可以降低设计成本和复杂度。
根据第二方面、以及第二方面的第一种至第八种可能的实现方式,在第二方面的第九种可能的实现方式中,该拟人化图案的格式包括拟人化图片或者拟人化图像交换格式GIF动画中的至少一种。
根据第二方面、以及第二方面的第一种至第九种可能的实现方式,在第二方面的第十种可能的实现方式中,该处理单元还用于:还用于利用摄像装置识别和/或定位该目标物体。例如,利用摄像头拍照、扫描该物体,实现识别和/或定位该目标物体。
根据第二方面、以及第二方面的第一种至第十种可能的实现方式,在第二方面的第十一种可能的实现方式中,该装置还可以包括物体识别与定位服务模块,例如,该物体识别与定位服务模块可以为摄像模块等。物体识别与定位服务模块用于对物体(现实物体)进行识别与定位的服务,输出现实物体的6个自由度的姿态。
根据第二方面、以及第二方面的第一种至第十一种可能的实现方式,在第二方面的第十二种可能的实现方式中,该装置还可以包括摄像装置,例如,摄像装置可以是摄像头等。
根据第二方面、以及第二方面的第一种至第十二种可能的实现方式,在第二方面的第十三种可能的实现方式中,该装置为终端设备(例如手机等),或者为其他的VR设备(例如VR眼镜等),AR设备、或者,该装置也可以是可穿戴设备、个人数字处理PDA等其他可以显示虚拟图像或者动画的设备。
第三方面,提供了一种通信装置,该装置包括用于执行以上第一方面或第一方面的任意可能的实现方式中的各个步骤的单元。
第四方面,提供了一种通信装置,该装置包括至少一个处理器和存储器,该至少一个 处理器用于执行以上第一方面或第一方面的任意可能的实现方式中的方法。
第五方面,提供了一种通信装置,该装置包括至少一个处理器和接口电路,该至少一个处理器用于执行以上第一方面或第一方面的任意可能的实现方式中的方法。
第六方面,提供了一种终端设备,该终端设备包括上述第二方面提供的通信装置,或者,该终端设备包括上述第三方面提供的通信装置,或者,该终端设备包括上述第四方面提供的通信装置。
第七方面,提供了一种计算机程序产品,该计算机程序产品包括计算机程序,该计算机程序在被处理器执行时,用于执行第一方面或第一方面的任意可能的实现方式中的方法。
第八方面,提供一种计算机可读存储介质,该计算机可读存储介质中存储有计算机程序,当该计算机程序被执行时,用于执行第一方面或第一方面的任意可能的实现方式中的方法。
第九方面,提供了一种芯片,该芯片包括:处理器,用于从存储器中调用并运行计算机程序,使得安装有该芯片的通信设备执行第一方面或第一方面的任意可能的实现方式中的方法。
本申请提供的拟人化3D模型生成的方法和装置,3D模型以及该3D模型上的拟人化图案是解耦的,针对不同的3D模型,不需要重新创建虚拟的五官以及四肢等。拟人化图案在3D模型上的位置以及拟人化的五官以及四肢等在3D模型上的投影大小是根据现实物体的外形特征确定的,可以使得拟人化图案在3D模型上的呈现更加准确生动。并且,通过投影的方式将拟人化图案渲染到物体对应的3D模型上,避免了出现悬空或者嵌入拟人化图案导致拟人化图案在3D模型上渲染时效果较差的问题,快速实现物体拟人化的AR效果,降低五官表情以及肢体的动作设计复杂度。
附图说明
图1是美工人员绘制的物体的形状的示意图。
图2是美工人员最终在3D模型上制作成的动画的示意图。
图3是本申请的一例应用场景的示意图。
图4是本申请的另一例应用场景的示意图。
图5是本申请实施例提供的一例拟人化3D模型生成的方法的示意性流程图。
图6是本申请实例中的一例六面体包围盒的示意图。
图7是本申请实施例提供的另一例拟人化3D模型生成的方法的示意性流程图。
图8是本申请实施例一例将拟人化图案进行投影时的俯视示意图。
图9是本申请实施例中的一例将虚拟五官图像投影到3D模型效果的示意图。
图10是本申请实施例中的一例眼睛和嘴巴在3D模型上的位置的示意图。
图11是本申请实施例中的一例肩部和腿部在3D模型上的位置的示意图。
图12是本申请实施例中的一例的虚拟五官投影后的效果的示意图。
图13是本申请实施例中的一例用户选择虚拟五官表情的示意图。
图14是本申请实施例中的另一例的虚拟五官投影后的效果的示意图。
图15是本申请实施例中的一例用户选择虚拟四肢的示意图。
图16是本申请实施例中的另一例的虚拟五官以及虚拟装饰投影后的效果的示意图。
图17是本申请实施例中的一例用户选择虚拟装饰的示意图。
图18是本申请实施例提供的通信装置的示意性框图。
图19是本申请实施例提供的另一例通信装置的示意性框图。
图20是本申请实施例提供的终端设备的示意性框图。
具体实施方式
下面将结合附图,对本申请中的技术方案进行描述。
现有3D动画电影有一种物体拟人化形态,赋予物体生命,能够使得冷冰冰的物体能够表现得像人一样行动、思考及发生表情变化,给观众带来从物体视角的完全不同的感受,如《汽车总动员》中的汽车,《美女与野兽》中的家具与餐具,独特的拟人化效果带来感同身受的效果,帮助这些电影获得巨大收益。
目前,在物体上制作拟人化动画一般采用3D模型制作的方式,通过美工人员提前对现实物体进行3D建模,然后在3D模型上制作对应的五官、肢体模型然后再做纹理贴图,形成动画。
例如,美工人员首先通过绘画,绘制出物体的形状以及初步效果。例如,图1所示的为美工人员绘制的物体的形状的示意图。然后,美工人员根据原画,通过建模软件,例如如Maya、3Dmax或者Blender等,把物体的3D模型创建出来,然后设计纹理贴图,贴到3D模型上。拟人物体需要的五官和四肢,也是在整个3D建模中直接创建出来的。最后,对3D模型中的五官添加骨骼,设计动作、调整蒙皮、修正权重等,最终形成动画。例如,图2所示的为最终形成的动画的示意图。
利用上述的方法,每生成一个3D模型,就需要在该模型上绘画或者创建虚拟的五官、四肢。不用的美工人员利用各自的方法或者喜好确定虚拟的五官和四肢在3D模型的上位置以及大小,造成针对五官表情以及肢体的动作设计复杂,即便是简单的动画也需要较长时间专业的调整。并且,针物不同物体,对应的3D模型需要重新创造,进一步的还需要对新的3D模型重新制作五官、肢体动作进行调整,适应性较差。
有鉴于此,本申请提供的一种拟人化3D模型生成的方法,3D模型以及该3D模型上的拟人化图案是解耦的,即针对不同的3D模型,不需要重新创建虚拟的五官以及四肢等。拟人化图案在3D模型上的位置以及拟人化的五官以及四肢等在3D模型上的投影大小是根据3D物体的实际尺寸确定的,可以获得主观体验上较好的拟人化虚拟形象。并且,通过投影的方式将拟人化图案投影到物体对应的3D模型上,避免了出现悬空或者嵌入拟人化图案导致拟人化图案在3D模型上生成的效果较差的问题,快速实现物体拟人化的增强现实(augmented reality,AR)效果,降低五官表情以及肢体的动作设计复杂度。
下面简单介绍本申请提供的方案的应用场景。
图3为本申请的一例应用场景的示意图。如图3所示的,在通过终端设备(例如手机)对现实物体进行识别后,在现实物体上叠加拟人化的图案(例如虚拟五官以及四肢等)。虚拟五官可以根据语音或文本实现唇音同步及表情驱动,虚拟四肢也可以能通过语音、文本理解后进行驱动。将现实物体与虚拟五官以及四肢完美结合起来,多个虚拟形象(也可以称为拟人化图案)之间、虚拟形象与其它现实物体之间都可以形成互动,提升应用可 玩性。
图4为本申请的一例应用场景的示意图。如图4所示的,在通过虚拟现实(virtual reality,VR)设备(例如VR眼镜等)对现实物体进行识别后,在现实物体上叠加拟人化的图案(例如虚拟五官以及四肢等)。将现实物体与虚拟五官以及四肢完美结合起来,多个虚拟形象(也可以称为拟人化图案)之间、虚拟形象与其它现实物体之间都可以形成互动,提升应用可玩性。
应该理解,图3和图4所示的例子不应该对本申请实施例应用的场景造成任何的限制。例如,本申请还可以应用在其他的场景中,例如3D动画制作的过程。
下面结合图5对本申请提供的拟人化3D模型生成的方法进行说明。应理解,本申请提供的拟人化3D模型生成的方法的执行主体可以为终端设备(例如手机等),也可以为其他的VR设备(例如VR眼镜等),AR设备、或者,也可以是可穿戴设备、个人数字处理(Personal Digital Assistant,PDA)等其他可以显示虚拟图像或者动画的设备等。本申请实施例在此不作限制。
本申请实施例中的终端设备可以指具有显示3D动画功能的用户设备、接入终端、用户单元、用户站、移动站、移动台、远方站、远程终端、移动设备、用户终端、终端、无线通信设备、用户代理或用户装置。或者,还可以是有显示3D动画功能蜂窝电话、无绳电话、会话启动协议(Session Initiation Protocol,SIP)电话、无线本地环路(Wireless Local Loop,WLL)站、具有无线通信功能的手持设备、计算设备、车载设备、可穿戴设备、5G网络中的终端设备未来的其他通信系统中的终端设备等,本申请实施例对此并不限定。
如图5所示,图5中示出的拟人化3D模型生成的方法200可以包括步骤S210至步骤S240。下面结合图5详细说明方法200中的各个步骤。
S210,获取目标物体对应的3D模型。
S220,获取该目标物体的拟人化图案。
S230,根据该目标物体的外形特征确定该拟人化图案在该3D模型上的位置和投影大小。
S240,根据该拟人化图案在该3D模型上的位置和投影大小,将该拟人化图案渲染到该3D模型上,生成拟人化3D模型。
在S210中,终端设备(例如,手机、VR设备或者AR设备等)可以首先获取目标物体(下文的描述中简称为“物体”)的3D模型。这里的物体可以是现实中存在的现实物体,例如饮料瓶、花瓶、桌子等任何现实物体。
例如,在S210中,终端设备可以通过本地调用、外部调用或者自己生成的方式等获取体的3D模型。本地调用可以理解为不同物体对应的3D模型已经存储在终端设备上,终端设备可以根据物体的类型、尺寸等从已经存储的多个3D模型中选择与物体对应的3D模型。外部调用可以理解为:不同物体对应的3D模型已经存储在外部设备上(例如服务器上或者其他的设备上),终端设备可以根据物体的类型、尺寸等从外部设备存储的多个3D模型中选择与物体对应的3D模型,并获取所选择的3D模型。终端设备自己生成3D模型可以理解为终端设备根据物体的类型、尺寸等,利用建模软件,例如Maya、3Dmax或者Blender等,生成现实物体对应的3D模型,在本申请实施例中,根据物体的类型、尺寸等可以称为物体的外形特征,换句话说,终端设备可以根据物体的外形特征,获取物 体对应的3D模型。应理解,在本申请实施例中,终端设备还可以通过其他方式获取该物体对应的3D模型,本申请实施例在此不做限制。
可选的,在本申请实施例中,在获取物体对应的3D模型之前,终端设备还可以通过利用摄像装置识别和/或定位物体。这里的摄像装置例如可以为摄像头或者摄像机等,或者还可以是其他类型的图像获取装置。例如,终端设备首先需要摄像头对显示物体进行拍照、扫描等,从而实现定位并且识别现实物体(或者也可以称为3D物体),确定现实物体在相机坐标系中的位置,通过3D对象检测与定位算法实现,输出现实物体的6个自由度的(6degree of freedom,6DoF)的姿态。通过该现实物体的6DoF姿态,可以将该现实物体的3D模型放置在与现实物体完全匹配的位置和姿态上。获得现实物体的6DoF姿态后,VR设备可以加载该现实物体的3D模型并根据现实物体的6DoF姿态进行放置,完成后,在相机坐标系中,现实物体位置上有一个3D数字模型处于同一位置。3D模型制作时需设定其中心为现实物体的六面体包围盒(Bounding Box)的中心,采用右手坐标系,正面方向为-Y方向,重力方向为-Z方向,图6所示的为一例六面体包围盒的示意图,如图6所示,长、宽、高分别定义为y/x/z轴方向现实物体尺寸。对于六面体包围盒,每一个现实物体都可以被一个六面体盒子包围,这个六面体盒子可以理解为该现实物体的六面体包围盒,六面体包围盒也可以理解为限位框或者边界框。六面体盒子的几何中心可以理解现实物体的中心。也就是说,将现实物体的几何中心确定为物体的3D模型的中心。
应理解,在本申请实施例中,现实物体的尺寸(或者大小)与该现实物体对应的3D模型的尺寸(或者大小)是成比例的。例如,现实物体的尺寸与该现实物体对应的3D模型的尺寸可以是成比例的缩小或者放大。举例说明:假设现实物体为一个长2米,宽1.5米的门,则显示在该终端设备上的现实物体对应的3D模型的实际尺寸可以为长0.2米、宽0.15米的门,该终端设备上显示的现实物体对应的3D模型上的标注尺寸可以为长2米,宽1.5米的门。即在该终端设备上显示的现实物体对应的3D模型上的标注尺寸可以和现实物体的尺寸相同。
还应理解,现实物体姿态与显示在终端设备上的现实物体对应的3D模型的姿态是相同的。
在获取现实物体的3D模型后,在S220中,可以获取物体的拟人化图案。
具体的,在S220中,终端设备中的应用可以自动或由用户手动选择2D的拟人化图案。
作为一种可能的实现方式,终端设备上的应用可以自动加载多个可用的官表情或者多个可用的虚拟四肢在设备上显示给用户,用户可以选择其中一个五官表情或者虚拟四肢作用于3D模型上。例如,终端设备结合识别后的现实物体的类型、尺寸、姿态等,确定多个可用的五官表情或者多个虚拟四肢,并呈现给用户,供用户选择,用户根据自己的喜好、3D模型大小、姿态等选择其中的一个五官表情或者虚拟四肢作用于该现实物体对应的3D模型上。其中,现实物体的类型可以理解为现实物体的具体种类,例如:瓶子、桌子、书本等任何种类。可选的,终端设备可以预存一张表格,该表格中包括不同类型的现实物体对应的不同的拟人化图案,并且,同一现实物体的不同姿态也可以对应不同的拟人化图案等,在确定现实物体,或者现实物体的姿态,便可以将与该现实物体对应的多个拟人化图案呈现给用户,供用户选择。
作为另一种可能的实现方式,终端设备可以根据现实物体的类型、尺寸、姿态等,自动确定作用于3D模型上的五官表情或者虚拟四肢。例如,终端设备可以预存一张表格,该表格中包括不同类型的现实物体对应的不同的拟人化图案。进一步的,同一现实物体的不同姿态也可以对应不同的拟人化图案等。终端设备结合识别后的现实物体的类型、尺寸、姿态等,可以自动确定作用于该现实物体3D模型上的拟人化图案。可选的,终端设备还可以结合用户的爱好或者用户之前选择的情况自动确定作用于现实物体3D模型上的拟人化图案。例如,如果终端设备识别的是某一个现实物体,终端设备可以将该用户之前选择的作用于相同的现实物体3D模型上的拟人化图案作为本次需要使用的拟人化图案,或者,从该用户之前选择的作用于相同的现实物体3D模型上的多个拟人化图案中,确定使用次数最多的拟人化图案为本次需要使用的拟人化图案等。本申请实施例对于终端设备自动确定作用于3D模型上的拟人化图案的过程不作限制。
可选的,可以在终端设备内部或者服务器上预先定义并存储大量的拟人化图案资源,拟人化图案资源包括多个或者多种拟人化图案。拟人化图案可以包括虚拟五官、虚拟四肢等。可选的,还可以在终端设备内部或者服务器上预先定义并存储大量的虚拟装饰物资源,虚拟装饰物资源可以包括虚拟五官、虚拟四肢上的装饰物等,装饰物例如可以是帽子、围巾、鞋子、衣服或者其他装饰品等。可选的,拟人化图案资源可以包括虚拟装饰物资源。在确定了现实物体后,直接从预存的拟人化图案资源中选择合适的拟人化图案,并且,从预存的虚拟装饰物资源中选择合适的虚拟装饰物添加到现实物体对应的3D模型上。即3D模型以及该3D模型上的拟人化图案是解耦的,针对不同的3D模型,不需要重新创建虚拟的五官以及四肢等。可以降低设计成本和复杂度。
在本申请实施例中,拟人化图案的格式可以包括拟人化图片或者拟人化图像交换格式(graphics interchange format,GIF)动画。应理解,在本申请实施例中,拟人化图案的格式还可以包括其它格式,本申请在此不作限制。
在S230中,可以根据物体的外形特征,确定该拟人化图案在该3D模型上的位置和投影大小。在本申请实施例中,物体的外形特征可以理解为物体图像中像素的数量,根据该物体图像中像素的数量,可以确定物体的长、宽、高,或者,可以确定长、宽、高中任意两个参数或者三个参数的比例。或者,物体的外形特征也可以理解为物体实际的长、宽、高中任意两个参数或者三个参数的比例。
在本申请实施例中,在从拟人化图案资源中确定出与现实物体对应的拟人化图案后,拟人化图案在3D模型上的位置以及在3D模型的投影面上的大小是根据该3D模型对应的现实物体的外形特征确定的。而不是随意确定或者根据不同美工喜好进行确定,这样可以使得拟人化图案与3D模型之间相互匹配与协调,使得拟人化图案在3D模型上的呈现更加准确生动,同时根据拟人动画的五官、肢体位置及大小的计算方法,可以获得主观体验上较好的拟人化虚拟形象。
本申请实施例中,3D模型的投影面可以理解为3D模型的六面体包围盒与拟人化图案平面平行的面。由于3D模型的六面体包围盒包括两个与拟人化图案平面平行的面,这里的投影面可以为靠近拟人化图案,并且与拟人化图案平面平行的面。例如,图6中所示的六面体包围盒的正面即为投影面。拟人化图案在3D模型的投影面上的大小可以理解为拟人化图案在3D模型的六面体包围盒与拟人化图案平面平行的面上的投影大小。
在确定了拟人化图案在3D模型上的位置以及投影大小后,在S240中,可以根据拟人化图案在3D模型上的位置和拟人化图案在3D模型的投影面上的投影大小,将拟人化图案渲染(或者也可以称为投影)到3D模型上,生成拟人化3D模型。即利用2D图像(拟人化图案)向3D物体进行渲染或者投影的方式使以2D图像呈现的五官可以完美的贴合在3D物体上,避免出现悬空或嵌入3D物体导致五官生成的效果差的问题。
在拟人化3D模型生成之后,便可以通过终端设备上的显示器或者显示装置显示给用户。
本申请提供的拟人化3D模型生成的方法,拟人化图案在3D模型上的位置以及拟人化的五官以及四肢等在3D模型上的投影大小是根据现实物体的外形特征确定的,可以使得拟人化图案在3D模型上的呈现更加准确生动。并且,通过投影的方式将拟人化图案渲染到物体对应的3D模型上,避免了出现悬空或者嵌入拟人化图案导致拟人化图案在3D模型上生成的效果较差的问题,快速实现物体拟人化的AR效果,降低五官表情以及肢体的动作设计复杂度,并且可以使得拟人化图案在3D模型上的呈现更加准确生动。
可选的,在本申请一些可能的实现方式中,如图7所示,图7是本申请一些实施例中拟人化3D模型生成的方法的示意性流程图,在图5所示的方法步骤的基础上,该方法中的S240,根据拟人化图案在3D模型上的位置和投影大小,将拟人化图案渲染到3D模型上,包括:S241。
S241,在渲染该拟人化图案时,根据拟人化图案的大小、以及该拟人化图案在该3D模型的投影面上的投影大小,确定(调整)该拟人化图案与该3D模型之间的距离、以及该拟人化图案与虚拟投影点之间的距离,以使得该拟人化图案在3D模型的投影面上的投影面大小和根据该目标物体的外形特征确定的拟人化图案的投影大小相同。
图7所示的步骤S210、S220、S230可以参考上述对S210、S220以及S230的相关描述,为了简洁,这里不再赘述。
在S241中,在将拟人化图案(例如为虚拟五官或者虚拟四肢)渲染到3D模型的投影面上时,由于在S230中,已经根据现实物体的外形特征确定了该拟人化图案在该3D模型的投影面上的投影大小和位置,因此在渲染或者投影时,可以根据虚拟五官或者虚拟四肢的大小、调整(或者确定)该虚拟五官或者虚拟四肢与该3D模型之间的距离、以及该虚拟五官或者虚拟四肢与虚拟投影点之间的距离,使虚拟五官或者虚拟四肢在3D模型的投影面的投影大小和S230中确定出的投影大小一致。
其中,虚拟投影点可以理解为虚拟投影装置:例如可以为虚拟相机,由该虚拟投影点点发射光线与拟人化图案相交并延伸到3D模型上,便可以在3D模型的投影面上投影拟人化图案。即采用光线投影的方式在3D模型上进行拟人化图案的投影,最终在3D模型上投影成像。拟人化图案的大小可以理解为应用自动或由用户手动选择的2D拟人化图案本身的大小。拟人化图案与3D模型之间的距离可以理解为拟人化图案与3D模型的投影面之间的距离。通过调整该拟人化图案与该3D模型之间的距离、以及该拟人化图案与虚拟投影点之间的距离,实现在3D模型上调整拟人化图案的大小,可以快速准确的将拟人化图像按照确定的大小投影或者渲染到3D模型上,提高渲染效率,并且,使得拟人化图案在3D模型上的呈现更加准确生动。
例如,图8所示的为一例将拟人化图案进行渲染时的俯视示意图。
如图8所示的,虚拟相机(或者也可以称为虚拟投影相机)可以理解为终端设备中的虚拟投影点,拟人化图案为虚拟五官图像,投影相机发射光线,其中,虚拟相机距离虚拟五官图像的距离为X 1,虚拟五官图像距离3D模型的投影面的距离为X 2,虚拟五官图像的大小W 1。在3D模型的正前方位置虚拟相机,在虚拟投影相机与3D模型之间放置2D的虚拟五官图片。虚拟相机发出的光线经过虚拟五官图片上任意一个像素点位置Pf,在3D模型表面上得到一个投影点Pt,Pf与Pt是在一条直线上。该投影点Pt的颜色与虚拟五官图片上颜色一致。则可以根据虚拟五官图像的大小,调整X 1以及X 2的大小,使得虚拟五官图像在3D模型投影面上的大小S与确定出的投影大小一致。例如,如图9所示的,图9为本申请实施例中的一例将虚拟五官图像投影到3D模型效果的示意图。
可选的,如果投影点Pt颜色为图片底色,可以预先将图片底色设为透明后在进行投影。
可选的,在本申请实施例中,除了可以调整虚拟五官图像在3D模型投影面上的大小之外,还可以调整整虚拟五官图像在3D模型投影面上位置,由于在S230中,已经确定了拟人化图案在该3D模型上的位置,因此,在渲染的过程中,可以调整拟人化图案的位置,使得拟人化图案在3D模型的投影位置与确定出的拟人化图案在该3D模型上的位置一致。例如,在图8所示的例子中,可以将拟人化图案的向上或者向下移动,或者,还可以将拟人化图案的向左或者向右移动,从而实现调整拟人化图案在3D模型的投影位置。
在本申请一些可能的实现方式中,拟人化图案在3D模型的投影面上的投影大小S满足如下公式(1):
Figure PCTCN2021070703-appb-000003
该拟人化图案的大小W 1,该拟人化图案与该3D模型的投影面之间的距离为X 2,该拟人化图案与该虚拟投影点之间的距离为X 1
具体的,在将拟人化图案投影到3D模型的过程中,由于拟人化图案的在3D模型上的投影大小需要与3D模型本身的大小相适应,从而获得主观体验上较好的拟人化虚拟形象。因此,在投影或者渲染的过程中,需要调整拟人化图案在3D模型投影面上的大小,使得其与之前根据3D模型对应的现实物体的外形特征确定的投影大小相适应。例如在图8所示的例子中,投影相机的位置、虚拟五官图片的大小Sf、投影相机与五官图片的距离X 1及五官图片与3D模型的距离X 2这些参数均作用到3D模型上的投影成像尺寸S,根据应用要求的St大小,可以对X 1以及X 2进行调整,使得拟人化图案在3D模型投影面上的大小满足要求。通过上述的公式(1)调整拟人化图案在3D模型上的投影大小,可以提高调整拟人化图案在3D模型上的投影大小的准确性以及效率,便于实现。
可选的,由于拟人化图案是用户自己选择或者应用自动选择的,拟人化图案本身的大小一般在用户自己选择或者应用自动选择后便已经确定了。但是,在本申请实施例中,可选的,在用户自己选择或者应用自动选择拟人化图像后,为了使得拟人化图案在3D模型投影面上的大小与3D模型本身的大小相适应,还可以通过调节拟人化图案本身大小W 1的方式来实现。
下面将介绍根据物体的外形特征,确定物体对应的拟人化图案在3D模型上的位置和拟人化图案在3D模型的投影面上的投影大小的具体过程。
在本申请实施例中,拟人化图案可以包括虚拟五官、虚拟四肢等,下文将以虚拟五官和虚拟四肢为例进行说明。还应理解,在本申请实施例中,虚拟五官的个数可以为一个或者多个,虚拟四肢的个数也可以为一个或者多个。
对于虚拟五官,可以先根据3D物体(即现实物体)的外形特征,选择人脸模型中五官的位置关系。
其中,人脸模型中五官的位置关系(即虚拟五官的比例关系)可以包括眼睛与头顶之间的距离与人头长度之间的比例关系,可以包括嘴巴与头顶之间的距离与人头长度之间的比例关系,还可以包括双眼之间的距离与人头宽度之间的比例关系,或者还可以包括其它五官与头顶之间的距离与人头长度之间的比例关系等,本申请实施例在此不作限制。应理解,人脸模型中五官的位置关系还可以用其它的方式进行表述,本申请在此不做限定。可选的,人脸模型中五官的位置关系可以通过统计大量的普通人的人脸中五官的位置关系获得,也可以根据经验人工设定,本申请在此不做限定。例如,普通人的眼睛与头顶之间的距离约为人头长度的1/2,嘴巴与头顶之间的距离约为人头长度7/9。
根据3D物体的外形特征,确定人脸模型中五官的位置关系。具体可以为:例如,当3D物体的宽高比满足范围条件A1时,眼睛与头顶之间的距离与人头长度的比例为B1;当3D物体的宽高比满足范围条件A2时,眼睛与头顶之间的距离与人头长度的比例为B2。
又例如,当3D物体的宽高比满足范围条件A3时,嘴巴与头顶之间的距离与人头长度的比例为C1;当3D物体的宽高比满足范围条件A4时,嘴巴与头顶之间的距离与人头长度的比例为C1。
又例如,当3D物体的宽高比满足范围条件A1时,眼睛与头顶之间的距离与人头长度的比例为B1,嘴巴与头顶之间的距离与人头长度的比例为C1;当3D物体的宽高比满足范围条件A2时,眼睛与头顶之间的距离与人头长度的比例为B2,嘴巴与头顶之间的距离与人头长度的比例为C2。
应理解,在本申请实施例中,确定不同的五官的位置关系(比例关系)可以使用不同的条件进行选择,也可以使用相同的条件进行选择。具体的实现方式除了上述列举的几种之外,还可以利用其他方式确定与3D物体对应的五官的位置关系,本申请在此不作限制。
在确定了人脸模型中五官的位置关系后,便可以根据3D物体的外形特征以及人脸模型中五官的位置关系,确定虚拟五官在3D物体对应的3D模型上的位置。例如,3D物体的宽高比满足范围条件A2,眼睛与头顶之间的距离与人头长度的比例为B2,嘴巴与头顶之间的距离与人头长度的比例为C2。3D模型的高度为Ho,那么眼睛在3D模型上的位置就可以用3D模型上眼睛与头顶之间的距离Heo来表示,其中,Heo=B2×Ho。嘴巴在3D模型上的位置就可以用3D模型上嘴巴与头顶之间的距离Hmo来表示,其中,Hmo=C2×Ho。例如,如图10所示的,3D模型的高度为Ho,眼睛和嘴巴在3D模型上的位置如图10所示的。
可选的,在本申请实施例中,还可以根据3D物体的外形特征、人脸模型中五官的比例关系,确定虚拟五官在3D模型的投影面上的投影大小。
例如,一般情况下,普通人双眼总宽度占脸部总宽度的3/5(用θ表示),θ可以根据应用需求(如艺术设计)来决定。假设用户自己选择或者应用自动选择的五官图像上双眼宽度为W 1,3D模型上放置虚拟五官的区域的宽度是W 2,则根据投影关系,假设虚拟投影 相机位置放置点距离五官图像为X 1五官图像距离3D模型的六面包围盒的投影面的距离为X 2,五官图像投影到3D模型的投影面上的大小为S,则满足如下公式(2)和公式(3):
Figure PCTCN2021070703-appb-000004
S=W 2×θ  (3)
根据公式(3),可以确定五官图像在3D模型的投影面上的大小S。在投影的过程中,根据公式(2),可以调整X 1以及X 2的值,使得五官图像在3D模型投影面上的投影大小为S。
对于虚拟四肢,首先根据3D物体(即现实物体)的外形特征,选择人体模型中四肢的位置关系(比例关系)。
人体模型中四肢的位置关系可以包括肩部到头顶的距离与身高之间的比例关系,可以包括腿部到头顶的距离与身高之间的比例关系。可选的,人体模型中四肢的位置关系还可以用其它的方式进行表述,本申请在此不做限定。可选的,人体模型中四肢的位置关系也可以通过统计大量的普通人的四肢与身高的关系获得,也可以根据经验人工设定,本申请在此不做限定。例如,普通人的肩部位置在身高的1/7的位置。
根据3D物体的外形特征,选择人体模型中四肢的位置关系。例如,当3D物体的宽高比满足范围条件A1时,肩部到头顶的距离与身高之间的比例为D1;当3D物体的宽高比满足范围条件A2时,肩部到头顶的距离与身高之间的比例为D2。
又例如,当3D物体的宽高比满足范围条件A2时,腿部到头顶的距离与身高之间的比例为E1;当3D物体的宽高比满足范围条件A3时,腿部到头顶的距离与身高之间的比例为E2。
应理解,在本申请实施例中,确定不同的四肢的位置关系(比例关系)可以使用不同的条件进行选择,也可以使用相同的条件进行选择。具体的实现方式除了上述列举的几种之外,还可以利用其他方式确定与3D物体对应的四肢的位置关系,本申请在此不作限制。
在确定了人体模型中四肢的位置关系后,根据3D物体的外形特征以及人体模型中四肢的位置关系,就可以确定虚拟四肢在3D物体上的位置。例如,3D物体的宽高比满足范围条件A2,肩部到头顶的距离与身高之间的比例为D2,腿部到头顶的距离与身高之间的比例为E1。3D物体的高度为Ho,那么肩部在3D模型上的位置就可以用3D模型上肩部到头顶的距离Hso来表示,其中,Hso=D2×Ho。那么腿部在3D模型上的位置就可以用3D模型上腿部到头顶的距离Hlo来表示,其中,Hlo=E1×Ho。肩部和腿部在3D模型上的位置如图11所示的。
可选的,在本申请实施例中,还可以根据3D物体的外形特征,选择人体模型中四肢的比例,确定虚拟四肢的长度。
人体模型中四肢的比例可以包括上肢的长度与身高之间的比例关系,也可以包括下肢的长度与身高之间的比例关系。可选的,人体模型中四肢的比例还可以用其它的方式进行表述,本申请在此不做限定。人体模型中四肢的比例关系可以通过统计大量的普通人的四肢与身高的关系获得,也可以根据经验人工设定,本申请在此不做限定。例如,普通人的上肢的长度为身高的1/3,下肢的长度为身高的3/5等。
进一步的,可以根据3D物体的外形特征,确定人体模型中虚拟四肢的长度。例如, 当3D物体的宽高比满足范围条件A1时,上肢的长度与身高的比例为F1,当3D物体的宽高比满足范围条件A2时,上肢的长度与身高的比例为F2。又例如,当3D物体的宽高比满足范围条件B1时,下肢的长度与身高的比例为G1,当3D物体的宽高比满足范围条件A2时,下肢的长度与身高的比例为G2。
在确定了虚拟四肢的比例关系后,便可以根据3D物体的外形特征以及虚拟四肢的比例关系,确定虚拟四肢在3D物体对应的3D模型上的位置。
可选的,在本申请实施例中,还可以根据3D物体的外形特征、虚拟五官在3D物体上的位置,确定虚拟四肢在3D物体上的位置。
例如,假设,普通人的眼睛位置处于人头长度的1/2位置,嘴巴位置在人头长度7/9位置。肩部位置在身高的1/7位置。
当1/2<3D物体宽高比<2时,在3D模型高度的1/2位置放置眼睛,在3D模型高度的7/9位置放置嘴巴。
当3D物体宽高比小于或者等于1/2时,在3D模型高度的1/4位置放置眼睛,在3D模型高度的1/2位置放置嘴巴。
当3D物体宽高比大于或者等于2时,在3D模型高度的1/2位置放置眼睛,在3D模型高度的3/4位置放置嘴巴。
虚拟上肢位置高度均与眼睛位置高度相同或偏下一些,虚拟上肢长度可以为1/3物体高度,虚拟下肢位置则在物体底部。
可选的,在本申请一些可能的实现方式中,除了将拟人化图案(例如虚拟五官、虚拟四肢等)投影在3D模型上,还可以在虚拟五官、虚拟四肢上添加虚拟装饰物。可选的,可以在VR设备内部或者服务器上预先定义并存储大量的虚拟装饰物资源,虚拟装饰物资源包括多个或者多种虚拟装饰物。例如帽子、围巾、衣服以及其他饰物等。应用可以自动或由用户手动选择2D的虚拟装饰物。虚拟装饰物在3D模型上的位置可以根据虚拟五官和/或虚拟四肢的位置来决定。例如虚拟帽子位于虚拟眼睛上方,还可根据虚拟据帽子的类型,确定虚拟帽子的下边沿与虚拟眼睛之间的距离。虚拟饰物的大小根据可以根据3D物体的大小进行选择和调整。
本申请提供的拟人化3D模型生成的方法,3D模型以及该3D模型上的拟人化图案是解耦的,针对不同的3D模型,不需要重新创建虚拟的五官以及四肢等。拟人化的五官以及四肢等在3D模型上的位置以及拟人化的五官以及四肢等大小是根据3D物体的外形特征确定的。通过现实物体的外形特征,确定与之匹配的现实中普通人的虚拟五官的比例关系和/或该虚拟四肢的比例关系,然后确定虚拟五官以及虚拟四肢在该3D模型上的位置,可以获得主观体验上较好的拟人化虚拟形象,并且可以使得拟人化图案在3D模型上的呈现更加准确。
下面结合具体的例子进行说明。
图12所示的为本申请的一例为虚拟五官投影后的效果的示意图。应用启动后,应用可以自动选择或由用户手动选择2D的五官动画,作为拟人动画的五官及表情。例如,如图13所示的,应用可以加载可用的五官表情呈现设备上显示给用户,用户可以选择其中一项作用于3D模型上。用户只需要选择单张五官的GIF动画或大于两张不同形态五官图案根据时间交替投影即可实现。然后根据现实物体的外形特征确定虚拟五官图案在3D模 型上的位置以及大小,通过投影的方式将虚拟五官图像投影到3D模型上,便可以生成如图12所示的在3D模型上呈现虚拟五官的效果。
图14所示的为本申请的另一例为虚拟五官投影后的效果的示意图。应用启动后,应用可以自动选择或由用户手动选择2D的五官动画,作为拟人动画的五官及表情。例如,如图15所示的,应用可以加载可用的五官表情呈现设备上显示给用户,用户可以选择其中一项作用于3D模型上。应理解,由于虚拟肢体是在现实物体的3D模型之外的部分,并且投影后是平面效果,其它角度观看效果不佳,因此,虚拟肢体一般采用三维模型而非二维的图案,虚拟肢体模型可以根据现实物体的主体颜色或用户选定的颜色进行纹理着色。然后根据现实物体的外形特征确定虚拟五官图案在3D模型上的位置以及大小,通过投影的方式将虚拟五官图像投影到3D模型上,便可以生成如图14所示的在3D模型上呈现虚拟五官的效果。
在确定了虚拟四肢模型在3D模型上位置之后,需要虚拟四肢模型与3D模型的连接区域。为了减少肢体运动导致的侵入3D模型的问题,可选的,如图15所示的,可以将肢体模型与3D模型的连接端采用球形或椭球连接。具体的,假设3D模型与肢体模型左侧连接点P a1的坐标为(X a1,Y a1,Z a1),左侧连接点P a2的坐标为(X a2,Y a2,Z a2)。当肢体模型与3D模型的连接端为半球形时,假设球半径为r,左侧连接端的球心P h1的坐标为(X h1,Y h1,Z h1),右侧连接端的球心P h2的坐标为(X h2,Y h2,Z h2),其中,Y a1=Y a2=Y h1=Y h2,Z a1=Z a2=Z h1=Z h2,X hl≤X a1-r,X h2≥X a2+r。
图16所示的为本申请的另一例为虚拟五官投影后的效果的示意图。图16所示为是除虚拟五官、四肢之外,还添加了饰物模型的效果示意图。应用启动后,应用可以自动选择或由用户手动选择2D的五官动画以及需要的饰品模型。例如,如图17所示的,应用可以加载可用的饰品模型呈现设备上显示给用户,用户可以选择其中一项作用于3D模型上。例如,用户选择的虚拟帽子位置位于现实物体对应的六面体包围盒最顶部的面(TOP面)的中心方式放置。然后根据现实物体的外形特征确定虚拟五官图案以及虚拟帽子在3D模型上的位置以及大小,通过投影的方式将虚拟五官图像以及虚拟帽子投影到3D模型上,便可以生成如图16所示的在3D模型上呈现虚拟五官以及饰品的效果。
应理解,上述只是为了帮助本领域技术人员更好地理解本申请实施例,而非要限制本申请实施例的范围。本领域技术人员根据所给出的上述示例,显然可以进行各种等价的修改或变化,例如,上述方法200的各个实施例中某些步骤可以是不必须的,或者可以新加入某些步骤等。或者上述任意两种或者任意多种实施例的组合。这样的修改、变化或者组合后的方案也落入本申请实施例的范围内。
还应理解,上文对本申请实施例的描述着重于强调各个实施例之间的不同之处,未提到的相同或相似之处可以互相参考,为了简洁,这里不再赘述。
还应理解,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
还应理解,本申请实施例中,“预先设定”、“预先定义”可以通过在设备(例如,包括终端和网络设备)中预先保存相应的代码、表格或其他可用于指示相关信息的方式来实现,本申请对于其具体的实现方式不做限定。
还应理解,本申请实施例中的方式、情况、类别以及实施例的划分仅是为了描述的方 便,不应构成特别的限定,各种方式、类别、情况以及实施例中的特征在不矛盾的情况下可以相结合。
还应理解,在本申请的各个实施例中,如果没有特殊说明以及逻辑冲突,不同的实施例之间的术语和/或描述具有一致性、且可以相互引用,不同的实施例中的技术特征根据其内在的逻辑关系可以组合形成新的实施例。
以上结合图5至图17对本申请实施例的拟人化3D模型生成的方法做了详细说明。以下,结合图18至图20对本申请实施例拟人化3D模型生成的装置进行详细说明。
图18示出了本申请实施例的拟人化3D模型生成的装置300的示意性框图,该装置300可以对应上述方法200中描述的终端设备、VR设备、AR设备、近眼显示设备、有显示3D动画功能的用户设备等。也可以是应用于终端设备、VR设备、AR设备、近眼显示设备、有显示3D动画功能的用户设备等的芯片或组件,并且,该装置300中各模块或单元分别用于执行上述方法200中所执行的各动作或处理过程。
如图18所示,该装置300可以包括处理单元310和显示单元320。
处理单元310,用于获取目标物体的3D模型。
该处理单元310,还用于获取该目标物体的拟人化图案。
该处理单元310,还用于根据该目标物体的外形特征确定该拟人化图案在该3D模型上的位置和投影大小;
该处理单元310,还用于根据该拟人化图案在该3D模型上的位置和投影大小,将该拟人化图案渲染到该3D模型上,生成拟人化3D模型。
显示单元320,用于向用户显示该拟人化3D模型。
本申请提供的拟人化3D模型生成的装置,拟人化图案在3D模型上的位置以及拟人化的五官以及四肢等在3D模型上的投影大小是根据现实物体的外形特征确定的,可以使得拟人化图案在3D模型上的呈现更加准确生动,同时根据拟人动画的五官、肢体位置及大小的计算方法,获得主观体验上较好的拟人化虚拟形象。并且,通过投影的方式将拟人化图案渲染到物体对应的3D模型上,避免了出现悬空或者嵌入拟人化图案导致拟人化图案在3D模型上渲染时效果较差的问题,快速实现物体拟人化的AR效果,降低五官表情以及肢体的动作设计复杂度,并且可以使得拟人化图案在3D模型上的呈现更加准确生动。
可选的,在本申请的一些实施例中,该处理单元310具体用于,在渲染该拟人化图案时,根据该拟人化图案的大小,确定该拟人化图案与该3D模型之间的距离、以及该拟人化图案与虚拟投影点之间的距离,以使得该拟人化图案在该3D模型的投影面上的投影面大小和根据该目标物体的外形特征确定的投影大小相同。
可选的,在本申请的一些实施例中,该拟人化图案在该3D模型上的投影面上的投影大小S满足如下条件:
Figure PCTCN2021070703-appb-000005
其中,该拟人化图案的大小为W 1,该拟人化图案与该3D模型的投影面之间的距离为X 2,该3D模型的投影面为该3D模型六面体包围盒与该拟人化图案所在的面平行的面,该拟人化图案与该虚拟投影点之间的距离为X 1。示例性的,该拟人化图案可以为虚拟五官或者虚拟四肢。
可选的,在本申请的一些实施例中,该拟人化图案包括:
虚拟五官、和/或,虚拟四肢。
可选的,在本申请的一些实施例中,该虚拟四肢的个数也可以为一个或者多个。
可选的,在本申请的一些实施例中,该处理单元310具体用于:
根据目标物体的外形特征,确定该虚拟五官和/或该虚拟四肢的比例关系;
根据该目标物体的外形特征,以及该虚拟五官的比例关系和/或该虚拟四肢的比例关系,确定该虚拟五官的比例关系和/或该虚拟四肢在该3D模型上的位置和投影大小。
其中,该虚拟五官的比例关系包括:眼睛与头顶之间的距离与人头长度之间的比例关系、嘴巴与头顶之间的距离与人头长度之间的比例关系、双眼之间的距离与人头宽度之间的比例关系中的至少一种,
该虚拟四肢的比例关系包括:肩部到头顶的距离与身高之间的比例关系、腿部到头顶的距离与身高之间的比例关系、上肢的长度与身高之间的比例关系、下肢的长度与身高之间的比例关系中的至少一种。
可选的,在本申请的一些实施例中,该处理单元310,还用于根据该虚拟五官和/或该虚拟四肢在该3D模型上的位置,确定该3D模型上的虚拟装饰物在该3D模型上的位置。可选的,该虚拟装饰物是在虚拟装饰物资源中人工选取或者自动选取的,该虚拟装饰物资源包括多个虚拟装饰物。示例性的,虚拟装饰物资源可以包括虚拟五官、虚拟四肢上的装饰物等,装饰物例如可以是帽子、围巾、鞋子、衣服或者其他装饰品等。例如,虚拟帽子位于虚拟眼睛上方,虚拟围巾位于虚拟头部的下方等。
可选的,在本申请的一些实施例中,该处理单元310具体用于:根据该目标物体,从预存的拟人化图案资源中人工选取或者自动选取该拟人化图案,该拟人化图案资源包括多个拟人化图案。
可选的,在本申请的一些实施例中,该处理单元310具体用于:本地调用该3D模型、外部调用该3D模型、或者生成该3D模型。
可选的,在本申请的一些实施例中,该拟人化图案的格式包括拟人化图片或者拟人化图像交换格式GIF动画中的至少一种。
可选的,在本申请的一些实施例中,该处理单元,还用于利用摄像装置识别和/或定位该物体。例如,利用摄像头拍照、扫描该物体,实现识别和/或定位该物体。
进一步的,该装置300还可以包括存储单元,存储单元用于存储拟人化图案资源以及虚拟装饰物资源等。可选的,存储单元还用于存储处理单元310和显示单元320执行的指令。处理单元(模块)310、显示单元(模块)320和存储单元相互耦合,存储单元(模块)存储指令,处理单元310用于执行存储单元存储的指令,显示单元320用于在处理单元310的驱动下执行显示功能。
可选的,存储拟人化图案资源以及虚拟装饰物资源还可以存储在云端的服务器中,该装置300可以从服务器中获取存储拟人化图案资源以及虚拟装饰物资源。
可选的,该装置300还可以包括物体识别与定位服务模块,例如,该物体识别与定位服务模块可以为摄像模块等。物体识别与定位服务模块用于对现实物体进行识别与定位的服务,输出现实物体的6DoF的姿态。
可选的，该装置300也可以不包括物体识别与定位服务模块。物体识别与定位服务模块也可部署于云端的服务上。
应理解,装置300中各单元执行上述相应步骤的具体过程请参照前文中结合方法200、以及图5和图7中相关实施例中的描述,为了简洁,这里不加赘述。
还应理解，图18所示的装置300可以为终端设备（例如手机）、VR设备（例如VR眼镜）、AR设备、近眼显示设备、有显示3D动画功能的用户设备等。或者，终端设备、VR设备、AR设备、近眼显示设备、有显示3D动画功能的用户设备包括图18所示的装置300。
可选的,该装置300还可以包括摄像装置,例如,可以是摄像头等。
还应理解,本申请实施例中,处理单元310可以由处理器实现,存储单元可以由存储器实现,显示单元320可以由显示器实现,如图19所示,拟人化3D模型生成的装置400可以包括处理器410、存储器420和显示器430。
可选的,该装置400还可以包括摄像装置和显示装置,例如,可以是摄像头和显示器等。
可选的,该装置400还可以包括物体识别与定位服务模块,例如,该物体识别与定位服务模块可以为摄像模块等。
还应理解,以上装置中单元的划分仅仅是一种逻辑功能的划分,实际实现时可以全部或部分集成到一个物理实体上,也可以物理上分开。且装置中的单元可以全部以软件通过处理元件调用的形式实现;也可以全部以硬件的形式实现;还可以部分单元以软件通过处理元件调用的形式实现,部分单元以硬件的形式实现。例如,各个单元可以为单独设立的处理元件,也可以集成在装置的某一个芯片中实现,此外,也可以以程序的形式存储于存储器中,由装置的某一个处理元件调用并执行该单元的功能。这里该处理元件又可以称为处理器,可以是一种具有信号处理能力的集成电路。在实现过程中,上述方法的各步骤或以上各个单元可以通过处理器元件中的硬件的集成逻辑电路实现或者以软件通过处理元件调用的形式实现。
在一个例子中,以上任一装置中的单元或者模块可以是被配置成实施以上方法的一个或多个集成电路,例如:一个或多个专用集成电路(application specific integrated circuit,ASIC),或,一个或多个数字信号处理器(digital signal processor,DSP),或,一个或者多个现场可编程门阵列(field programmable gate array,FPGA),或这些集成电路形式中至少两种的组合。再如,当装置中的单元可以通过处理元件调度程序的形式实现时,该处理元件可以是通用处理器,例如中央处理器(central processing unit,CPU)或其它可以调用程序的处理器。再如,这些单元可以集成在一起,以片上系统(system-on-a-chip,SOC)的形式实现。
图20所示为本申请提供的一种终端设备的结构的示意性框图。以终端设备为手机为例,图20示出的是与本申请实施例相关的手机400的部分结构的框图。参考图20,手机400包括:射频(Radio Frequency,RF)电路410、电源420、处理器430、存储器440、输入单元450、显示单元460、摄像头170、音频电路480、以及无线保真(wireless fidelity,WiFi)模块490等部件。本领域技术人员可以理解,图20中示出的手机结构并不构成对手机的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图20对手机400的各个构成部件进行具体的介绍:
RF电路410可用于收发信息或通话过程中信号的接收和发送，特别地，将基站的下行信息接收后，给处理器430处理；另外，将涉及上行的数据发送给基站。通常，RF电路包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(Low Noise Amplifier,LNA)、双工器等。此外，RF电路410还可以通过无线通信与网络和其他设备通信。该无线通信可以使用任意一种通信标准或协议。在本申请实施例中，手机400可以通过RF电路410与云端的服务器进行通信，获取拟人化图案资源以及虚拟装饰物资源等。
存储器440可用于存储软件程序以及模块，处理器430通过运行存储在存储器440的软件程序以及模块，从而执行手机400的各种功能应用以及数据处理。存储器440可主要包括存储程序区和存储数据区，其中，存储程序区可存储操作系统、至少一个功能所需的应用程序（比如声音播放功能、图像播放功能等）等；存储数据区可存储根据手机400的使用所创建的数据（比如音频数据、电话本等）等。此外，存储器440可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。可选的，存储器440可以存储拟人化图案资源以及虚拟装饰物资源等。
输入单元450可用于接收输入的数字或字符信息，以及产生与手机400的用户设置以及功能控制有关的键信号输入。具体地，输入单元450可包括触控面板451以及其他输入设备452。触控面板451，也称为触摸屏，可收集用户在其上或附近的触摸操作（比如用户使用手指、触笔等任何适合的物体或附件在触控面板451上或在触控面板451附近的操作），并根据预先设定的程式驱动相应的连接装置。可选的，触控面板451可包括触摸检测装置和触摸控制器两个部分。其中，触摸检测装置检测用户的触摸方位，并检测触摸操作带来的信号，将信号传送给触摸控制器；触摸控制器从触摸检测装置上接收触摸信息，并将它转换成触点坐标，再送给处理器430，并能接收处理器430发来的命令并加以执行。此外，可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板451。除了触控面板451，输入单元450还可以包括其他输入设备452。具体地，其他输入设备452可以包括但不限于物理键盘、功能键（比如音量控制按键、开关按键等）、轨迹球、鼠标、操作杆等中的一种或多种。例如，在本申请实施例中，用户可以通过输入单元450在可用的多个五官图案中选择需要使用的五官图案。
显示单元460可用于显示由用户输入的信息或提供给用户的信息以及手机400的各种菜单。显示单元460可包括显示面板461，可选的，可以采用LCD、OLED等形式来配置显示面板461。进一步的，触控面板451可覆盖显示面板461，当触控面板451检测到在其上或附近的触摸操作后，传送给处理器430以确定触摸事件的类型，随后处理器430根据触摸事件的类型在显示面板461上提供相应的视觉输出。虽然在图20中，触控面板451与显示面板461是作为两个独立的部件来实现手机400的输入和输出功能，但是在某些实施例中，可以将触控面板451与显示面板461集成而实现手机400的输入和输出功能。例如，在本申请实施例中，显示单元460可以将最终在3D模型上生成的拟人化图案显示给用户。
手机400还可包括摄像头170，摄像头170用于获取图像或者视频资源等。在本申请实施例中，摄像头170可以完成现实物体的识别与定位。
音频电路480、扬声器481,麦克风482可提供用户与手机400之间的音频接口。音频电路480可将接收到的音频数据转换后的电信号,传输到扬声器481,由扬声器481转换为声音信号输出;另一方面,麦克风482将收集的声音信号转换为电信号,由音频电路480接收后转换为音频数据,再将音频数据输出至RF电路410以发送给比如另一手机,或者将音频数据输出至存储器440以便进一步处理。在本申请实施例中,用户可以通过音频电路实现与3D拟人化动画之间的互动,例如,虚拟五官可以根据语音实现唇音同步及表情驱动,虚拟四肢亦能通过语音进行驱动等。
WiFi属于短距离无线传输技术，手机400通过WiFi模块490可以帮助用户收发电子邮件、浏览网页和访问流式媒体等，它为用户提供了无线的宽带互联网访问。虽然图20示出了WiFi模块490，但是可以理解的是，其并不属于手机400的必须构成，完全可以根据需要在不改变发明的本质的范围内而省略。
处理器430是手机400的控制中心,利用各种接口和线路连接整个手机的各个部分,通过运行或执行存储在存储器440内的软件程序和/或模块,以及调用存储在存储器440内的数据,执行手机400的各种功能和处理数据,从而实现基于手机的多种业务。可选的,处理器430可包括一个或多个处理单元;优选的,处理器430可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器430中。在本申请实施例中,处理器430可以完成拟人化图案在3D模型投影面上的大小以及位置的计算。
手机400还包括给各个部件供电的电源420(比如电池),优选的,电源可以通过电源管理系统与处理器430逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗等功能。
尽管未示出,手机400还可以包括传感器、蓝牙模块等,在此不再赘述。
应理解,上述图20仅仅为本申请提供的手机的一种可能的结构,可以执行本申请提供的拟人化3D模型生成的方法的手机还可以是其他结构,本申请实施例在此不作限制。
应理解，本申请实施例中，该处理器可以为中央处理单元（central processing unit，CPU），该处理器还可以是其他通用处理器、数字信号处理器（digital signal processor，DSP）、专用集成电路（application specific integrated circuit，ASIC）、现场可编程门阵列（field programmable gate array，FPGA）或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
还应理解，本申请实施例中的存储器可以是易失性存储器或非易失性存储器，或可包括易失性和非易失性存储器两者。其中，非易失性存储器可以是只读存储器（read-only memory，ROM）、可编程只读存储器（programmable ROM，PROM）、可擦除可编程只读存储器（erasable PROM，EPROM）、电可擦除可编程只读存储器（electrically EPROM，EEPROM）或闪存。易失性存储器可以是随机存取存储器（random access memory，RAM），其用作外部高速缓存。通过示例性但不是限制性说明，许多形式的随机存取存储器（random access memory，RAM）可用，例如静态随机存取存储器（static RAM，SRAM）、动态随机存取存储器（dynamic RAM，DRAM）、同步动态随机存取存储器（synchronous DRAM，SDRAM）、双倍数据速率同步动态随机存取存储器（double data rate SDRAM，DDR SDRAM）、增强型同步动态随机存取存储器（enhanced SDRAM，ESDRAM）、同步连接动态随机存取存储器（synchlink DRAM，SLDRAM）和直接内存总线随机存取存储器（direct rambus RAM，DR RAM）。
本申请实施例还提供了一种通信系统,该通信系统包括:上述的拟人化3D模型生成的装置和上述的服务器。
本申请实施例还提供了一种计算机可读介质，用于存储计算机程序代码，该计算机程序代码包括用于执行上述本申请实施例的拟人化3D模型生成的方法200的指令。该可读介质可以是ROM或RAM，本申请实施例对此不做限制。
本申请还提供了一种计算机程序产品，该计算机程序产品包括指令，当该指令被执行时，使得拟人化3D模型生成的装置执行对应于上述方法的操作。
本申请实施例还提供了一种系统芯片，该系统芯片包括：处理单元和通信单元，该处理单元，例如可以是处理器，该通信单元例如可以是输入/输出接口、管脚或电路等。该处理单元可执行计算机指令，以使该通信装置内的芯片执行上述本申请实施例提供的任一种拟人化3D模型生成的方法。
可选地,上述本申请实施例中提供的任意一种通信装置可以包括该系统芯片。
可选地,该计算机指令被存储在存储单元中。
可选地,该存储单元为该芯片内的存储单元,如寄存器、缓存等,该存储单元还可以是该终端内的位于该芯片外部的存储单元,如ROM或可存储静态信息和指令的其他类型的静态存储设备,RAM等。其中,上述任一处提到的处理器,可以是一个CPU,微处理器,ASIC,或一个或多个用于控制上述的拟人化3D模型生成的方法的程序执行的集成电路。该处理单元和该存储单元可以解耦,分别设置在不同的物理设备上,通过有线或者无线的方式连接来实现该处理单元和该存储单元的各自的功能,以支持该系统芯片实现上述实施例中的各种功能。或者,该处理单元和该存储器也可以耦合在同一个设备上。
本文中术语“系统”和“网络”常被互换使用。本文中术语“和/或”，仅仅是一种描述关联对象的关联关系，表示可以存在三种关系，例如，A和/或B，可以表示：单独存在A，同时存在A和B，单独存在B这三种情况。另外，本文中字符“/”，一般表示前后关联对象是一种“或”的关系。
本申请中可能对各种消息/信息/设备/网元/系统/装置/动作/操作/流程/概念等各类客体进行了赋名，可以理解的是，这些具体的名称并不构成对相关客体的限定，所赋名称可随着场景、语境或者使用习惯等因素而变更，对本申请中技术术语的技术含义的理解，应主要从其在技术方案中所体现/执行的功能和技术效果来确定。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (19)

  1. 一种拟人化3D模型生成的方法,其特征在于,包括:
    获取目标物体的3D模型;
    获取所述目标物体的拟人化图案;
    根据所述目标物体的外形特征确定所述拟人化图案在所述3D模型上的位置和投影大小;
    根据所述拟人化图案在所述3D模型上的位置和投影大小,将所述拟人化图案渲染到所述3D模型上,生成拟人化3D模型。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述拟人化图案在所述3D模型上的位置和投影大小,将所述拟人化图案渲染到所述3D模型上,具体包括:
    在渲染所述拟人化图案时,根据所述拟人化图案的大小,确定所述拟人化图案与所述3D模型之间的距离、以及所述拟人化图案与虚拟投影点之间的距离,以使得所述拟人化图案在所述3D模型的投影面上的投影面大小和根据所述目标物体的外形特征确定的所述投影大小相同。
  3. 根据权利要求2所述的方法,其特征在于,所述拟人化图案包括虚拟五官,所述根据所述拟人化图案在所述3D模型上的位置和投影大小,将所述拟人化图案渲染到所述3D模型上,具体包括:
    在渲染所述虚拟五官时,根据所述虚拟五官的大小,确定所述虚拟五官与所述3D模型之间的距离、以及所述虚拟五官与虚拟投影点之间的距离,以使得所述虚拟五官在所述3D模型的投影面上的投影面大小和根据所述目标物体的外形特征确定的所述虚拟五官的投影大小相同。
  4. 根据权利要求2所述的方法,其特征在于,所述拟人化图案包括虚拟四肢,所述根据所述拟人化图案在所述3D模型上的位置和投影大小,将所述拟人化图案渲染到所述3D模型上,具体包括:
    在渲染所述虚拟四肢时,根据所述虚拟四肢的大小,确定所述虚拟四肢与所述3D模型之间的距离、以及所述虚拟四肢与虚拟投影点之间的距离,以使得所述虚拟四肢在所述3D模型的投影面上的投影面大小和根据所述目标物体的外形特征确定的所述虚拟四肢的投影大小相同。
  5. 根据权利要求2至4中任一项所述的方法,其特征在于,所述拟人化图案在所述3D模型的投影面上的投影大小S满足如下条件:
    S=W 1×(X 1+X 2)/X 1
    其中,所述拟人化图案的大小为W 1,所述拟人化图案与所述3D模型的投影面之间的距离为X 2,所述3D模型的投影面为所述3D模型六面体包围盒与所述拟人化图案所在的面平行的面,所述拟人化图案与所述虚拟投影点之间的距离为X 1
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述拟人化图案包括虚拟五官和/或虚拟四肢,所述根据所述目标物体的外形特征确定所述拟人化图案在所述3D模型上的位置和投影大小,具体包括:
    根据所述目标物体的外形特征,确定所述虚拟五官的比例关系和/或所述虚拟四肢的比例关系;
    根据所述目标物体的外形特征,以及所述虚拟五官的比例关系和/或所述虚拟四肢的比例关系,确定所述虚拟五官和/或所述虚拟四肢在所述3D模型上的位置和投影大小;
    其中,所述虚拟五官的比例关系包括:眼睛与头顶之间的距离与人头长度之间的比例关系、嘴巴与头顶之间的距离与人头长度之间的比例关系、双眼之间的距离与人头宽度之间的比例关系中的至少一种,
    所述虚拟四肢的比例关系包括:肩部到头顶的距离与身高之间的比例关系、腿部到头顶的距离与身高之间的比例关系、上肢的长度与身高之间的比例关系、下肢的长度与身高之间的比例关系中的至少一种。
  7. 根据权利要求1至6中任一项所述的方法,其特征在于,所述获取所述目标物体的3D模型,具体包括:
    本地调用所述3D模型、外部调用所述3D模型、或者生成所述3D模型中的任意一种。
  8. 根据权利要求1至7中任一项所述的方法,其特征在于,所述获取所述目标物体的拟人化图案,具体包括:
    根据所述目标物体,从预存的拟人化图案资源中人工选取或者自动选取所述拟人化图案,所述拟人化图案资源包括多个拟人化图案。
  9. 一种拟人化3D模型生成的装置,其特征在于,包括:
    处理单元,用于获取目标物体的3D模型;
    所述处理单元,还用于获取所述目标物体的拟人化图案;
    所述处理单元,还用于根据所述目标物体的外形特征确定所述拟人化图案在所述3D模型上的位置和投影大小;
    所述处理单元,还用于根据所述拟人化图案在所述3D模型上的位置和投影大小,将所述拟人化图案渲染到所述3D模型上,生成拟人化3D模型。
  10. 根据权利要求9所述的装置,其特征在于,
    所述处理单元具体用于:在渲染所述拟人化图案时,根据所述拟人化图案的大小,确定所述拟人化图案与所述3D模型之间的距离、以及所述拟人化图案与虚拟投影点之间的距离,以使得所述拟人化图案在所述3D模型的投影面上的投影面大小和根据所述目标物体的外形特征确定的所述投影大小相同。
  11. 根据权利要求9所述的装置,其特征在于,所述拟人化图案包括虚拟五官,
    所述处理单元具体用于:在渲染所述虚拟五官时,根据所述虚拟五官的大小,确定所述虚拟五官与所述3D模型之间的距离、以及所述虚拟五官与虚拟投影点之间的距离,以使得所述虚拟五官在所述3D模型的投影面上的投影面大小和根据所述目标物体的外形特征确定的所述虚拟五官的投影大小相同。
  12. 根据权利要求9所述的装置,其特征在于,所述拟人化图案包括虚拟四肢,
    所述处理单元具体用于:在渲染所述虚拟四肢时,根据所述虚拟四肢的大小,确定所述虚拟四肢与所述3D模型之间的距离、以及所述虚拟四肢与虚拟投影点之间的距离,以使得所述虚拟四肢在所述3D模型的投影面上的投影面大小和根据所述目标物体的外形特征确定的所述虚拟四肢的投影大小相同。
  13. 根据权利要求10至12中任一项所述的装置,其特征在于,所述拟人化图案在所述3D模型上的投影面上的投影大小S满足如下条件:
    S=W 1×(X 1+X 2)/X 1
    其中,所述拟人化图案的大小为W 1,所述拟人化图案与所述3D模型的投影面之间的距离为X 2,所述3D模型的投影面为所述3D模型六面体包围盒与所述拟人化图案所在的面平行的面,所述拟人化图案与所述虚拟投影点之间的距离为X 1
  14. 根据权利要求9至13中任一项所述的装置,其特征在于,所述拟人化图案包括虚拟五官和/或虚拟四肢,所述处理单元具体用于:
    根据所述目标物体的外形特征,确定所述虚拟五官和/或所述虚拟四肢的比例关系;
    根据所述目标物体的外形特征，以及所述虚拟五官的比例关系和/或所述虚拟四肢的比例关系，确定所述虚拟五官和/或所述虚拟四肢在所述3D模型上的位置和投影大小；
    其中,所述虚拟五官的比例关系包括:眼睛与头顶之间的距离与人头长度之间的比例关系、嘴巴与头顶之间的距离与人头长度之间的比例关系、双眼之间的距离与人头宽度之间的比例关系中的至少一种,
    所述虚拟四肢的比例关系包括:肩部到头顶的距离与身高之间的比例关系、腿部到头顶的距离与身高之间的比例关系、上肢的长度与身高之间的比例关系、下肢的长度与身高之间的比例关系中的至少一种。
  15. 根据权利要求9至14中任一项所述的装置,其特征在于,
    所述处理单元具体用于:本地调用所述3D模型、外部调用所述3D模型、或者生成所述3D模型。
  16. 根据权利要求9至15中任一项所述的装置,其特征在于,
    所述处理单元具体用于:根据所述目标物体,从预存的拟人化图案资源中人工选取或者自动选取所述拟人化图案,所述拟人化图案资源包括多个拟人化图案。
  17. 一种通信装置,其特征在于,所述装置包括至少一个处理器,所述至少一个处理器与至少一个存储器耦合:
    所述至少一个处理器,用于执行所述至少一个存储器中存储的计算机程序或指令,以使得所述装置执行如权利要求1至8中任一项所述的方法。
  18. 一种终端设备,其特征在于,所述终端设备包括权利要求9至16中任一项所述的拟人化3D模型生成的装置。
  19. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有计算机程序或指令,当计算机读取并执行所述计算机程序或指令时,使得计算机执行如权利要求1至8中任一项所述的方法。
PCT/CN2021/070703 2020-03-20 2021-01-07 拟人化3d模型生成的方法和装置 WO2021184932A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010201611.8A CN113436301B (zh) 2020-03-20 2020-03-20 拟人化3d模型生成的方法和装置
CN202010201611.8 2020-03-20

Publications (1)

Publication Number Publication Date
WO2021184932A1 true WO2021184932A1 (zh) 2021-09-23

Family

ID=77752469

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/070703 WO2021184932A1 (zh) 2020-03-20 2021-01-07 拟人化3d模型生成的方法和装置

Country Status (2)

Country Link
CN (1) CN113436301B (zh)
WO (1) WO2021184932A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114373057A (zh) * 2021-12-22 2022-04-19 聚好看科技股份有限公司 一种头发与头部模型的匹配方法及设备
CN114779470A (zh) * 2022-03-16 2022-07-22 青岛虚拟现实研究院有限公司 一种增强现实抬头显示系统的显示方法
CN115471618A (zh) * 2022-10-27 2022-12-13 科大讯飞股份有限公司 重定向方法、装置、电子设备和存储介质
CN115526966A (zh) * 2022-10-12 2022-12-27 广州鬼谷八荒信息科技有限公司 一种用调度五官部件实现虚拟人物表情展现的方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112053370A (zh) * 2020-09-09 2020-12-08 脸萌有限公司 基于增强现实的显示方法、设备及存储介质
CN116594531A (zh) * 2023-05-19 2023-08-15 如你所视(北京)科技有限公司 物体展示方法、装置、电子设备和存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093490A (zh) * 2013-02-02 2013-05-08 浙江大学 基于单个视频摄像机的实时人脸动画方法
CN103716586A (zh) * 2013-12-12 2014-04-09 中国科学院深圳先进技术研究院 一种基于三维空间场景的监控视频融合系统和方法
CN109002185A (zh) * 2018-06-21 2018-12-14 北京百度网讯科技有限公司 一种三维动画处理的方法、装置、设备及存储介质
CN109410298A (zh) * 2018-11-02 2019-03-01 北京恒信彩虹科技有限公司 一种虚拟模型的制作方法及表情变化方法
US20190362547A1 (en) * 2018-05-23 2019-11-28 Asustek Computer Inc. Three-dimensional head portrait generating method and electronic device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229239B (zh) * 2016-12-09 2020-07-10 武汉斗鱼网络科技有限公司 一种图像处理的方法及装置
CN108495032B (zh) * 2018-03-26 2020-08-04 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093490A (zh) * 2013-02-02 2013-05-08 浙江大学 基于单个视频摄像机的实时人脸动画方法
CN103716586A (zh) * 2013-12-12 2014-04-09 中国科学院深圳先进技术研究院 一种基于三维空间场景的监控视频融合系统和方法
US20190362547A1 (en) * 2018-05-23 2019-11-28 Asustek Computer Inc. Three-dimensional head portrait generating method and electronic device
CN109002185A (zh) * 2018-06-21 2018-12-14 北京百度网讯科技有限公司 一种三维动画处理的方法、装置、设备及存储介质
CN109410298A (zh) * 2018-11-02 2019-03-01 北京恒信彩虹科技有限公司 一种虚拟模型的制作方法及表情变化方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG MENGSI: "Research on the Technology in Individual Virtual Human Animation", CHINESE MASTER'S THESES FULL-TEXT DATABASE (ELECTRONIC JOURNAL), INFORMATION SCIENCE AND TECHNOLOGY, 31 October 2010 (2010-10-31), XP055852314 *

Also Published As

Publication number Publication date
CN113436301B (zh) 2024-04-09
CN113436301A (zh) 2021-09-24

Similar Documents

Publication Publication Date Title
WO2021184932A1 (zh) 拟人化3d模型生成的方法和装置
US10609334B2 (en) Group video communication method and network device
US11443462B2 (en) Method and apparatus for generating cartoon face image, and computer storage medium
JP6638892B2 (ja) 画像及び深度データを用いて3次元(3d)人物顔面モデルを発生させるための仮想現実ベースの装置及び方法
CN109427083B (zh) 三维虚拟形象的显示方法、装置、终端及存储介质
US10055879B2 (en) 3D human face reconstruction method, apparatus and server
US11778002B2 (en) Three dimensional modeling and rendering of head hair
WO2020108404A1 (zh) 三维人脸模型的参数配置方法、装置、设备及存储介质
CN108876878B (zh) 头像生成方法及装置
CN110580733A (zh) 一种数据处理方法、装置和用于数据处理的装置
KR20180118669A (ko) 디지털 커뮤니케이션 네트워크에 기반한 지능형 채팅
CN110580677A (zh) 一种数据处理方法、装置和用于数据处理的装置
US11989348B2 (en) Media content items with haptic feedback augmentations
EP4315265A1 (en) True size eyewear experience in real-time
CN112449098B (zh) 一种拍摄方法、装置、终端及存储介质
US20230120037A1 (en) True size eyewear in real time
US20220319059A1 (en) User-defined contextual spaces
WO2022212144A1 (en) User-defined contextual spaces
KR102138620B1 (ko) 증강현실을 이용한 3d 모델 구현시스템 및 이를 이용한 구현방법
WO2023158370A2 (zh) 表情包生成方法及设备
CN117999115A (zh) 为多用户通信会话对准扫描环境
WO2023158375A2 (zh) 表情包生成方法及设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21770944

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21770944

Country of ref document: EP

Kind code of ref document: A1