CN117058277A - Virtual object three-dimensional model generation method and device, medium and electronic equipment - Google Patents

Virtual object three-dimensional model generation method and device, medium and electronic equipment Download PDF

Info

Publication number
CN117058277A
CN117058277A (Application CN202311029627.5A)
Authority
CN
China
Prior art keywords
dimensional, animation, virtual object, model, virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311029627.5A
Other languages
Chinese (zh)
Inventor
吴学锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202311029627.5A
Publication of CN117058277A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/02 Non-photorealistic rendering
    • G06T15/04 Texture mapping
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The disclosure provides a method and an apparatus for generating a three-dimensional model of a virtual object, a storage medium, and an electronic device, relating to the field of computer technology. The method comprises: acquiring a two-dimensional animation of a virtual object; generating a sheet model corresponding to the virtual object in a three-dimensional virtual scene based on the two-dimensional animation; and projecting animation frames of the two-dimensional animation onto the sheet model to obtain the three-dimensional model of the virtual object. The method and apparatus improve the efficiency of generating three-dimensional models of virtual objects.

Description

Virtual object three-dimensional model generation method and device, medium and electronic equipment
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a method and device for generating a three-dimensional model of a virtual object, a computer readable storage medium and electronic equipment.
Background
The three-dimensional model is often applied to the fields of film, television, games, animation and the like, and can present a virtual scene effect with strong sense of reality by generating the three-dimensional model for the virtual object.
In the related art, a worker is generally required to manually make a three-dimensional model of a virtual object based on experience. Obviously, this method requires high labor cost and time cost, and is inefficient.
Disclosure of Invention
The disclosure provides a method and a device for generating a three-dimensional model of a virtual object, a computer readable storage medium and electronic equipment, so as to solve the problem of low generation efficiency of the three-dimensional model of the virtual object at least to a certain extent.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a method for generating a three-dimensional model of a virtual object, including: acquiring a two-dimensional animation of the virtual object; generating a sheet model corresponding to the virtual object in a three-dimensional virtual scene based on the two-dimensional animation; and projecting animation frames of the two-dimensional animation onto the sheet model to obtain the three-dimensional model of the virtual object.
According to a second aspect of the present disclosure, there is provided a generation apparatus of a virtual object three-dimensional model, including: an animation acquisition module configured to acquire a two-dimensional animation of the virtual object; the sheet model generation module is configured to generate a sheet model corresponding to the virtual object in a three-dimensional virtual scene based on the two-dimensional animation; and the animation projection module is configured to project animation frames in the two-dimensional animation onto the sheet model to obtain a three-dimensional model of the virtual object.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of generating a three-dimensional model of a virtual object of the first aspect described above and possible implementations thereof.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and the memory is used for storing executable instructions of the processor. Wherein the processor is configured to perform the method of generating a three-dimensional model of a virtual object of the first aspect described above and possible implementations thereof via execution of the executable instructions.
The technical scheme of the present disclosure has the following beneficial effects:
On the one hand, the three-dimensional model of the virtual object is generated automatically from its two-dimensional animation; compared with making the model by hand, this reduces labor and time costs, greatly improves efficiency, avoids the human deviation introduced by manual modeling, and improves model accuracy. On the other hand, generating a corresponding three-dimensional model for the virtual object effectively enhances the object's realism and improves user experience. In a further aspect, projecting the animation frames of the two-dimensional animation onto the sheet model to obtain the three-dimensional model reduces the complexity of the generation process and further improves efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 shows a system operation architecture of the present exemplary embodiment;
fig. 2 is a flowchart showing a method of generating a three-dimensional model of a virtual object in the present exemplary embodiment;
FIG. 3 shows a flowchart of one method of generating a sheet model in the present exemplary embodiment;
fig. 4 shows a flowchart of determining size information of a sheet model in the present exemplary embodiment;
fig. 5 is a flowchart showing another method of generating a three-dimensional model of a virtual object in the present exemplary embodiment;
fig. 6 is a schematic diagram showing a structure of a virtual object three-dimensional model generating apparatus in the present exemplary embodiment;
fig. 7 shows a schematic structural diagram of an electronic device in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. However, those skilled in the art will recognize that the aspects of the present disclosure may be practiced with one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
When a two-dimensional model of a virtual object is placed in a three-dimensional virtual scene, it cannot produce lighting effects such as shadows or fog when illuminated by a virtual light source in the scene, and it is difficult for it to interact realistically with other three-dimensional virtual objects. In the related art, obtaining a three-dimensional model of a virtual object usually requires a worker to build the model by hand, and achieving a realistic result further requires depicting the object's fine detail. This consumes considerable labor and time, so the generation efficiency of three-dimensional models of virtual objects is low.
In view of one or more of the above problems, exemplary embodiments of the present disclosure first provide a method of generating a three-dimensional model of a virtual object. The system architecture of the operating environment of the present exemplary embodiment is described below in conjunction with fig. 1.
Referring to fig. 1, a system architecture 100 may include a terminal device 110 and a server 120. The terminal device 110 may be an electronic device such as a tablet computer, a notebook computer, or a desktop computer, and may be configured to obtain the two-dimensional animation of a virtual object. The server 120 refers generally to a background system providing the model generation service in the present exemplary embodiment, for example a server that implements the method of generating a three-dimensional model of a virtual object. The server 120 may be a single server or a server cluster, which is not limited by this disclosure. The terminal device 110 and the server 120 may establish a wired or wireless communication link for data interaction.
The method of generating a virtual object three-dimensional model in the present exemplary embodiment may be performed by the terminal device 110. For example, in the game scene, the virtual object may be a game character in the game scene, the two-dimensional animation may be a motion change video of the virtual object in the two-dimensional virtual scene, and if the virtual object needs to be applied to the three-dimensional scene, the terminal device 110 may project an animation frame of the two-dimensional animation onto the sheet model in the three-dimensional scene by executing the method for generating the three-dimensional model of the virtual object, so as to obtain the three-dimensional model of the virtual object.
In one embodiment, the terminal device 110 may obtain a two-dimensional animation of the virtual object, and send the two-dimensional animation to the server 120, after the server 120 receives the two-dimensional animation sent by the terminal device 110, generate a sheet model corresponding to the virtual object in the three-dimensional virtual scene based on the two-dimensional animation, and project an animation frame in the two-dimensional animation onto the sheet model, so as to obtain the three-dimensional model of the virtual object.
As can be seen from the above, the method for generating a three-dimensional model of a virtual object in the present exemplary embodiment may be performed by the terminal device 110 or the server 120 described above.
The method of generating the virtual object three-dimensional model will be described with reference to fig. 2. Fig. 2 shows an exemplary flow of a method of generating a three-dimensional model of a virtual object, including the following steps S210 to S230:
step S210, obtaining a two-dimensional animation of a virtual object;
step S220, generating a flaky model corresponding to the virtual object in the three-dimensional virtual scene based on the two-dimensional animation;
in step S230, the animation frame in the two-dimensional animation is projected onto the sheet model to obtain the three-dimensional model of the virtual object.
Based on the method, on the one hand, the three-dimensional model of the virtual object is generated automatically from its two-dimensional animation; compared with making the model by hand, this reduces labor and time costs, greatly improves efficiency, avoids the human deviation introduced by manual modeling, and improves model accuracy. On the other hand, generating a corresponding three-dimensional model for the virtual object effectively enhances the object's realism and improves user experience. In a further aspect, projecting the animation frames of the two-dimensional animation onto the sheet model to obtain the three-dimensional model reduces the complexity of the generation process and further improves efficiency.
Each step in fig. 2 is specifically described below.
Referring to fig. 2, in step S210, a two-dimensional animation of a virtual object is acquired.
The virtual object may be any object for which a three-dimensional model is to be generated; examples include a game character in a game scene, or a tree or building in a three-dimensional animation scene. The two-dimensional animation may be an animation video that contains only the virtual object and shows its motion and skeletal changes. For example, the two-dimensional animation may be produced with the 2D animation software Spine. Compared with traditional frame-by-frame animation, Spine's skeleton-based animation technique produces smooth, natural animation for game characters and scenes more quickly and efficiently, and Spine integrates with a variety of game engines such as Unity and Cocos2d-x. Accordingly, the two-dimensional animation of the virtual object can be obtained either by producing a video in Spine that includes both the virtual object and a virtual scene and then matting out the virtual object, or by directly producing a video with a transparent background that contains only the virtual object.
In one embodiment, the obtaining the two-dimensional animation of the virtual object may include the following steps:
and generating skeleton animation of the virtual object according to the skeleton information and the action information of the virtual object to serve as a two-dimensional animation.
Wherein the bone information may characterize a bone change process of the virtual object; the motion information may characterize a course of motion change of the virtual object. The skeleton animation may be an animation video including a skeleton change process and a motion change process of the virtual object, and illustratively, skeleton information and motion information of the virtual object may be set in the 2D animation software Spine by a worker to obtain the skeleton animation.
Using the skeleton animation as the two-dimensional animation of the virtual object ensures that the animation fully displays the object's motion and skeletal changes, which further improves the realism of the resulting three-dimensional model.
With continued reference to fig. 2, in step S220, a sheet model corresponding to the virtual object is generated in the three-dimensional virtual scene based on the two-dimensional animation.
The sheet model may serve as a carrier for displaying the two-dimensional animation of the virtual object in three-dimensional space; for example, the sheet model may be rendered in Unity using a billboard rendering technique.
In one embodiment, the generating the sheet model corresponding to the virtual object in the three-dimensional virtual scene based on the two-dimensional animation, as shown in fig. 3, may include steps S310 to S320:
step S310, determining size information of the sheet model based on the bounding box of the two-dimensional animation;
step S320, generating a sheet model in the three-dimensional virtual scene according to the size information.
The bounding box of the two-dimensional animation comprises a bounding box corresponding to each animation frame to be projected in the two-dimensional animation; the animation frames to be projected may include all animation frames in the two-dimensional animation, or may include animation frames obtained by filtering redundant animation frames in the two-dimensional animation.
Based on the method of fig. 3, determining the size information of the sheet model from the bounding box of the two-dimensional animation ensures that the sheet model can display the two-dimensional animation completely, preserves the fidelity of the three-dimensional model, and reduces the complexity of the sheet-model generation process.
In one embodiment, the determining the size information of the sheet model based on the bounding box of the two-dimensional animation, referring to fig. 4, may include steps S410 to S420:
step S410, acquiring bounding boxes corresponding to each animation frame to be projected according to the contour information of each animation frame to be projected in the two-dimensional animation, and determining the largest bounding box;
step S420, determining size information of the sheet model based on the size of the largest bounding box.
The size information of the sheet model may be the size of the largest bounding box, or may be the size obtained by enlarging the largest bounding box.
For example, according to the contour information of each animation frame to be projected in the two-dimensional animation, the bounding box corresponding to each animation frame to be projected can be accurately obtained, the largest bounding box is determined, and the size of the largest bounding box is determined as the size information of the sheet model.
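The bounding-box sizing of steps S410 to S420 can be sketched as follows; the alpha-grid representation and the optional `padding` factor (for the "enlarged" size mentioned above) are assumptions for illustration.

```python
# Illustrative sketch: derive a per-frame bounding box from opaque pixels,
# then take the largest box as the sheet-model size. Names are assumptions.
from typing import List, Tuple

def frame_bounding_box(alpha: List[List[int]]) -> Tuple[int, int]:
    """Return (width, height) of the tight box around non-transparent pixels."""
    rows = [r for r, row in enumerate(alpha) if any(row)]
    cols = [c for row in alpha for c, a in enumerate(row) if a]
    if not rows:
        return (0, 0)
    return (max(cols) - min(cols) + 1, max(rows) - min(rows) + 1)

def sheet_size(frames: List[List[List[int]]], padding: float = 1.0):
    """Largest bounding box across all frames to project, optionally scaled up."""
    boxes = [frame_bounding_box(f) for f in frames]
    w = max(b[0] for b in boxes)
    h = max(b[1] for b in boxes)
    return (w * padding, h * padding)
```

Using the maximum box guarantees that no frame is clipped when projected onto the sheet.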
Based on the method of fig. 4, the size information of the sheet model is determined according to the maximum bounding box size, so that each animation frame to be projected can be ensured to be completely displayed on the sheet model, the complexity of the method is reduced, the display effect of the three-dimensional model can be ensured, and the generation efficiency of the three-dimensional model of the virtual object is effectively improved.
In addition, so that the generated three-dimensional model exhibits lighting effects such as shadows in the three-dimensional virtual scene, in one embodiment, generating the sheet model corresponding to the virtual object in the three-dimensional virtual scene based on the two-dimensional animation may further include: setting the transparency of the sheet model according to a preset transparency value.
The preset transparency value is a basis for setting transparency of the sheet model, and specific numerical values of the preset transparency value are not particularly limited in the present disclosure, for example, the preset transparency value may be 0.5.
By making the sheet model transparent, the lighting received by the three-dimensional model can later be determined from the occlusion region formed by projecting the animation frame onto the sheet model, further improving the realism of the three-dimensional model of the virtual object in the three-dimensional virtual scene.
After the two-dimensional animation and the sheet model of the virtual object are acquired, with continued reference to fig. 2, in step S230, an animation frame in the two-dimensional animation may be projected onto the sheet model to obtain a three-dimensional model of the virtual object.
A three-dimensional model is an object model formed by points, lines and surfaces in a three-dimensional coordinate system; it has the three dimensions of length, width and height, and can be created, edited and rendered by defining those dimensions. Three-dimensional models offer strong realism, interactivity, high definition and flexibility, and are widely used in modern computer graphics, film production, game development, virtual reality and architectural design. In game development in particular, generating a three-dimensional model of a game object further improves the realism of a three-dimensional game scene and effectively increases player immersion.
In one embodiment, the projecting the animation frame in the two-dimensional animation onto the sheet model may include the following steps:
and rendering the animation frame to a rendering target to obtain texture information of the virtual object, and rendering the sheet model according to the texture information.
A render target (Render Target) is a video memory buffer to which pixels are rendered; it can be understood as a texture map that can be dynamically written and read. In game development, render targets are commonly used to implement display effects such as multi-view rendering, image post-processing, screen dithering, reflection, shadowing and ambient occlusion.
By rendering the animation frames to the rendering target, the possibility that the animation frames in the two-dimensional animation are projected to the three-dimensional sheet model is provided, and the generation efficiency of the three-dimensional model of the virtual object is improved.
To further improve the generation efficiency of the three-dimensional model of the virtual object, the number of render targets used can be reduced. In one implementation, all animation frames of the two-dimensional animation are rendered to the target position of the same render target; that target position is then sampled to obtain texture information of the virtual object, and the sheet model is rendered according to the texture information.
Wherein the target position may be a rendering position of the animation frame in the rendering target. The sampling rendering target is a texture mapping technology commonly used in three-dimensional scenes, and abundant texture information and detail information can be obtained through the sampling rendering target.
For example, the two-dimensional animation may be refreshed to obtain an animation frame, the animation frame is rendered to the target position of the same rendering target to update the texture information of the target position in the rendering target, and then the sheet model is rendered according to the updated texture information to obtain the three-dimensional animation of the virtual object.
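The refresh loop just described, where every frame overwrites the same region of a single render target, can be sketched as below. The `RenderTarget` class and the callback are stand-ins for engine objects, not a real graphics API.

```python
# Hypothetical sketch of reusing one render target: each refresh overwrites
# the same buffer region, and the sheet model samples that region.
class RenderTarget:
    def __init__(self, w: int, h: int):
        self.pixels = [[0] * w for _ in range(h)]

    def blit(self, frame):
        """Render the frame into the target position (here: the whole buffer)."""
        for r, row in enumerate(frame):
            for c, v in enumerate(row):
                self.pixels[r][c] = v

    def sample(self, r: int, c: int):
        """Texture lookup used when rendering the sheet model."""
        return self.pixels[r][c]

def play(frames, target, render_sheet):
    for frame in frames:      # refresh the 2D animation frame by frame
        target.blit(frame)    # update texture in place: one target in total
        render_sheet(target)  # re-render the sheet model from the target
```

Because the target is updated in place, the sheet model's texture always reflects the current animation frame without allocating a buffer per frame.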
By rendering all the animation frames of the two-dimensional animation to the same rendering target, the number of the rendering targets can be reduced, and the operation pressure of the system can be reduced, so that the operation efficiency of the system can be improved, and the generation efficiency of the three-dimensional model of the virtual object can be further improved.
In order to further reduce the number of usage of the rendering targets, in one embodiment, the above-mentioned rendering the animation frames onto the rendering targets, obtaining texture information of the virtual objects, and rendering the sheet model according to the texture information may include the following steps:
under the condition that a plurality of virtual objects have the same animation frame, the same animation frame is rendered to the same rendering target, the target position of the same rendering target is sampled, so that texture information of the plurality of virtual objects is obtained, and a sheet model corresponding to the plurality of virtual objects is rendered according to the texture information of the plurality of virtual objects.
For example, in the case that the virtual object a and the virtual object B have the same animation frame, and the virtual object a and the virtual object B play the animation frame at the same time, if the virtual object a has already rendered the animation frame at the target position of the rendering target a, the virtual object B does not need to render the animation frame in the rendering target B, but can directly sample the target position of the rendering target a to obtain texture information corresponding to the animation frame, and render the sheet model B corresponding to the virtual object B through the texture information.
The same animation frames are rendered to the same rendering target, so that virtual objects with the same animation frames can share the texture information, the number of the rendering targets is further reduced, the frequency of rendering steps is reduced, and the generation efficiency of the three-dimensional model of the virtual objects is improved.
In a three-dimensional scene, real-world lighting can be simulated by placing a virtual light source, and shading the three-dimensional model under that light source gives it a stereoscopic, realistic appearance. Therefore, to give the generated three-dimensional model a shadow effect, in one embodiment the three-dimensional virtual scene includes a virtual light source, and after the animation frames of the two-dimensional animation are projected onto the sheet model to obtain the three-dimensional model of the virtual object, the method may further include the following step:
and determining the shadow effect of the three-dimensional model by combining the contour information of the occlusion region of the animation frame on the sheet model and the position relation between the sheet model and the virtual light source.
For example, the shadow contour of the three-dimensional model may be determined by contour information of an occlusion region of an image projected on the sheet model by the animation frame on the sheet model, and a display angle of the shadow contour may be updated based on a positional relationship between the sheet model and the virtual light source to determine a shadow effect of the three-dimensional model.
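As a rough illustration of the two inputs this step combines, the occlusion contour can be read from the projected frame's opaque pixels, and the shadow's display direction from the light-to-sheet geometry. This is plain vector math under assumed names, not the patent's algorithm.

```python
# Illustrative sketch: occlusion contour from the frame's alpha mask, plus a
# shadow direction derived from the light/sheet positions. Names are assumptions.
import math

def occluded_columns(alpha):
    """Contour of the occlusion region: sheet columns covered by opaque pixels."""
    return [c for c in range(len(alpha[0])) if any(row[c] for row in alpha)]

def shadow_direction(light_pos, sheet_pos):
    """Unit vector from the light through the sheet: shadows extend this way."""
    dx, dy, dz = (s - l for s, l in zip(sheet_pos, light_pos))
    n = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / n, dy / n, dz / n)
```

A renderer could then extrude the contour along the shadow direction to draw the shadow, updating the direction whenever the light or sheet moves.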
By generating the shadow effect of the three-dimensional model of the virtual object, the sense of reality of the three-dimensional model is enhanced, and the user experience is improved.
In addition, the sheet model is a three-dimensional model, so that interaction with other three-dimensional models in the three-dimensional virtual scene can be performed through collision detection of the sheet model, and the sense of reality of the three-dimensional model of the generated virtual object is further improved.
Since the animation frames are projected onto the sheet model to generate the three-dimensional model of the virtual object, in one embodiment, the three-dimensional virtual scene includes a virtual camera for capturing the virtual object, and after the animation frames in the two-dimensional animation are projected onto the sheet model to obtain the three-dimensional model of the virtual object, the method may further include the steps of:
and setting the shooting direction of the virtual camera according to the projection direction of the animation frame in the two-dimensional animation to the sheet model, so that the virtual camera always shoots the front surface of the three-dimensional model.
Wherein the projection direction refers to the direction in which the animation frame is projected onto the sheet model.
The shooting direction of the virtual camera is set through the projection direction, so that the projection direction is consistent with the shooting direction of the virtual camera, the virtual camera always shoots the front face of the three-dimensional model, the sense of reality of the three-dimensional model of the virtual object is ensured, and meanwhile, the complexity of the three-dimensional model generating method is further reduced.
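Keeping the camera's view direction equal to the projection direction can be sketched as a small placement helper; the function names and the fixed-distance placement are assumptions for illustration.

```python
# Illustrative sketch: align the virtual camera's view direction with the
# frame's projection direction so it always faces the sheet's front.
def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return tuple(x / n for x in v)

def align_camera(projection_dir, sheet_pos, distance):
    """Place the camera so it looks along projection_dir toward the sheet."""
    d = normalize(projection_dir)
    cam_pos = tuple(p - distance * x for p, x in zip(sheet_pos, d))
    return cam_pos, d  # camera position and view direction
```

Re-running this whenever the projection direction changes keeps the front of the three-dimensional model facing the camera.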
In one embodiment, the projecting the animation frame in the two-dimensional animation onto the sheet model to obtain the three-dimensional model of the virtual object may further include the following steps:
in the case where the current animation frame in the two-dimensional animation has been projected onto the sheet model, if it is detected that the position of the three-dimensional model corresponding to the next animation frame would be beyond the shooting range of the virtual camera, the current animation frame remains projected onto the sheet model.
That is, once the position of the three-dimensional model falls outside the shooting range of the virtual camera, the animation frame projected onto the sheet model is no longer updated, and the projection effect of the current animation frame on the sheet model is maintained. This avoids, to a certain extent, the performance cost of refreshing animation frames that cannot be seen, which reduces the load on the system and improves the efficiency of generating the three-dimensional model of the virtual object.
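The frame-skipping rule above can be sketched as follows (all names are hypothetical; the shooting range is simplified to an axis-aligned box):

```python
def in_shooting_range(pos, range_min, range_max):
    """Axis-wise containment test against the camera's shooting range."""
    return all(range_min[i] <= pos[i] <= range_max[i] for i in range(3))

def frame_to_project(current_frame, next_frame, next_model_pos,
                     range_min, range_max):
    if in_shooting_range(next_model_pos, range_min, range_max):
        return next_frame     # visible: refresh the projection normally
    return current_frame      # off-screen: keep the existing projection

# A model that drifts out of range keeps its last projected frame.
kept = frame_to_project("walk_04", "walk_05", (50.0, 0.0, 0.0),
                        (-10.0, -10.0, -10.0), (10.0, 10.0, 10.0))
print(kept)  # walk_04
```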
This method improves the realism of the generated three-dimensional model, reduces the labor and time costs consumed in producing it, and effectively improves the efficiency of generating the three-dimensional model of the virtual object.
In one embodiment, referring to fig. 5, a two-dimensional animation virtual manager, a rendering target virtual manager, a bounding box virtual manager, a sheet model manager, and a three-dimensional model manager may be deployed in a game engine. Based on steps S501 to S516, a game containing a large number of two-dimensional game objects can be given an effect upgrade to produce the corresponding 3D game:
step S501, generating a two-dimensional animation of a virtual object through Spine;
step S502, requesting to generate a corresponding bounding box for each animation frame to be projected;
step S503, obtaining corresponding bounding boxes according to the contour information of each animation frame to be projected;
step S504, obtaining the size of the maximum bounding box among the bounding boxes;
step S505, sending the maximum bounding box size;
step S506, generating a sheet model corresponding to the virtual object according to the maximum bounding box size and a preset transparency value;
step S507, sending the sheet model;
step S508, refreshing the two-dimensional animation to obtain an animation frame to be projected;
step S509, requesting to generate a corresponding rendering target;
step S510, inquiring whether the animation frame to be projected is already rendered to the rendering target; if yes, go to step S512, otherwise, go to step S511;
step S511, rendering the animation frame to be projected to a target position on a rendering target;
step S512, sampling the target position of the rendering target to obtain texture information of the animation frame to be projected;
step S513, sending texture information of the animation frame to be projected;
step S514, the texture information of the animation frame to be projected is rendered to the sheet model to generate a three-dimensional model of the virtual object;
step S515, determining a shadow effect of the three-dimensional model according to contour information of a shielding area of the animation frame to be projected on the sheet model and a position relation between the sheet model and the virtual light source;
step S516, a three-dimensional model generation completion instruction corresponding to the animation frame to be projected is sent.
Based on this method, existing 2D game resources can be effectively reused when upgrading the effects of a 2D game, avoiding to a certain extent the enormous labor and time costs that developing the corresponding 3D game effect from scratch would otherwise incur.
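The flow of steps S501 to S516 can be condensed into the following sketch (the classes, frame identifiers, and sizes are all hypothetical; the real managers live inside the game engine):

```python
class RenderTargetManager:
    """S509-S512: render each distinct animation frame at most once,
    then sample the cached rendering target on later requests."""
    def __init__(self):
        self._targets = {}

    def texture_for(self, frame_id):
        if frame_id not in self._targets:                 # S510: rendered yet?
            self._targets[frame_id] = f"tex<{frame_id}>"  # S511: render once
        return self._targets[frame_id]                    # S512: sample target

def largest_size(frame_sizes):
    """S503-S504: keep the size of the largest per-frame bounding box."""
    return max(frame_sizes, key=lambda wh: wh[0] * wh[1])

def run_pipeline(frames):
    """S505-S516 condensed: size the sheet, then project each frame onto it."""
    sheet = largest_size([f["size"] for f in frames])     # S505-S506: sheet model
    manager = RenderTargetManager()
    projected = [manager.texture_for(f["id"]) for f in frames]  # S508-S514
    return sheet, projected

frames = [{"id": "run_01", "size": (64, 96)},
          {"id": "run_02", "size": (80, 96)},
          {"id": "run_01", "size": (64, 96)}]  # repeated frame reuses its target
sheet, projected = run_pipeline(frames)
print(sheet)      # (80, 96)
print(projected)  # ['tex<run_01>', 'tex<run_02>', 'tex<run_01>']
```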
The exemplary embodiment of the disclosure also provides a device for generating the three-dimensional model of the virtual object. As shown in fig. 6, the apparatus 600 for generating a three-dimensional model of a virtual object may include:
an animation acquisition module 610 configured to acquire a two-dimensional animation of a virtual object;
a sheet model generation module 620 configured to generate a sheet model corresponding to a virtual object in a three-dimensional virtual scene based on a two-dimensional animation;
an animation projection module 630 configured to project an animation frame in the two-dimensional animation onto the sheet model to obtain a three-dimensional model of the virtual object.
In one embodiment, the obtaining the two-dimensional animation of the virtual object may include:
and generating skeleton animation of the virtual object according to the skeleton information and the action information of the virtual object to serve as a two-dimensional animation.
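As a rough illustration (the bone names, lengths, and angles are invented; real projects would use a skeletal animation tool such as Spine), a two-dimensional skeletal pose is driven by per-frame joint angles applied down the bone hierarchy:

```python
import math

def bone_tip(origin, length, angle_deg):
    """End point of a bone of the given length rotated about its parent joint."""
    a = math.radians(angle_deg)
    return (origin[0] + length * math.cos(a),
            origin[1] + length * math.sin(a))

# Skeleton information supplies the bone lengths; action information
# supplies the per-frame joint angles.
shoulder = (0.0, 0.0)
elbow = bone_tip(shoulder, 2.0, 0.0)   # upper arm pointing along +x
hand = bone_tip(elbow, 1.0, 90.0)      # forearm bent straight up
print(hand)  # approximately (2.0, 1.0)
```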
In an embodiment, the generating the sheet model corresponding to the virtual object in the three-dimensional virtual scene based on the two-dimensional animation may include:
determining size information of the sheet model based on the bounding box of the two-dimensional animation;
and generating a sheet model in the three-dimensional virtual scene according to the size information.
In one embodiment, the determining the size information of the sheet model based on the bounding box of the two-dimensional animation may include:
acquiring bounding boxes corresponding to each animation frame to be projected according to the contour information of each animation frame to be projected in the two-dimensional animation, and determining the largest bounding box;
the size information of the sheet model is determined based on the size of the largest bounding box.
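A minimal sketch of this sizing step (the contour point lists are hypothetical): compute one bounding box per frame from its contour, then let the sheet adopt the largest, so every animation frame fits when projected.

```python
def bounding_box(contour):
    """Width/height of the axis-aligned box enclosing a frame's contour points."""
    xs = [x for x, _ in contour]
    ys = [y for _, y in contour]
    return (max(xs) - min(xs), max(ys) - min(ys))

def sheet_size(contours):
    """Size of the largest per-frame bounding box, by area."""
    return max((bounding_box(c) for c in contours),
               key=lambda wh: wh[0] * wh[1])

frames = [
    [(0, 0), (4, 0), (4, 6)],   # 4 x 6 box
    [(1, 1), (9, 2), (5, 8)],   # 8 x 7 box, the largest
]
print(sheet_size(frames))  # (8, 7)
```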
In one embodiment, when generating the sheet model corresponding to the virtual object in the three-dimensional virtual scene based on the two-dimensional animation, the apparatus may further include:
and setting the transparency of the sheet model according to the preset transparency value.
In one embodiment, the projecting the animation frame in the two-dimensional animation onto the sheet model may include:
and rendering the animation frame to a rendering target to obtain texture information of the virtual object, and rendering the sheet model according to the texture information.
In an embodiment, the rendering the animation frame onto the rendering target, obtaining texture information of the virtual object, and rendering the sheet model according to the texture information may include:
in the case where a plurality of virtual objects have the same animation frame, that animation frame is rendered to a single rendering target; the target position of that rendering target is then sampled to obtain the texture information of the plurality of virtual objects, and the sheet models corresponding to those virtual objects are rendered according to that texture information.
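This sharing can be sketched as follows (object and frame names are hypothetical): objects with the same animation frame share one rendering target, so the expensive render happens once per distinct frame rather than once per object.

```python
render_passes = 0
_targets = {}

def texture_for(frame_id):
    """Render the frame to its target only on first request, then sample."""
    global render_passes
    if frame_id not in _targets:
        render_passes += 1                    # one real render per distinct frame
        _targets[frame_id] = f"tex<{frame_id}>"
    return _targets[frame_id]

objects = [("goblin_a", "walk_03"), ("goblin_b", "walk_03"), ("knight", "idle_01")]
sheet_textures = {name: texture_for(frame) for name, frame in objects}
print(render_passes)  # 2 render passes serve 3 virtual objects
```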
In one embodiment, the three-dimensional virtual scene includes a virtual light source, and after projecting an animation frame in the two-dimensional animation onto the sheet model to obtain a three-dimensional model of the virtual object, the apparatus may further include:
and determining the shadow effect of the three-dimensional model by combining the contour information of the occlusion region of the animation frame on the sheet model and the position relation between the sheet model and the virtual light source.
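One way to realize this (a geometric sketch only; the patent does not fix a formula, and the light and contour coordinates below are invented) is to cast each contour point of the occlusion region from the virtual light source onto a ground plane:

```python
def shadow_point(light, point, ground_y=0.0):
    """Intersect the ray from the light through a contour point with the
    plane y = ground_y; assumes the light sits above the point."""
    lx, ly, lz = light
    px, py, pz = point
    t = (ly - ground_y) / (ly - py)   # ray parameter at the ground plane
    return (lx + t * (px - lx), ground_y, lz + t * (pz - lz))

def shadow_outline(light, contour):
    """Project every occlusion-contour point to get the shadow outline."""
    return [shadow_point(light, p) for p in contour]

light = (0.0, 10.0, 0.0)
contour = [(1.0, 2.0, 0.0), (2.0, 4.0, 0.0)]   # occlusion region on the sheet
print(shadow_outline(light, contour)[0])  # (1.25, 0.0, 0.0)
```

The resulting outline can then be darkened on the ground, so the shadow shape follows the animation frame rather than the rectangular sheet.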
In one embodiment, the three-dimensional virtual scene includes a virtual camera for capturing a virtual object, and after projecting an animation frame in a two-dimensional animation onto a sheet model to obtain a three-dimensional model of the virtual object, the apparatus may further include:
and setting the shooting direction of the virtual camera according to the projection direction of the animation frame in the two-dimensional animation to the sheet model, so that the virtual camera always shoots the front surface of the three-dimensional model.
In an embodiment, the projecting the animation frame in the two-dimensional animation onto the sheet model to obtain the three-dimensional model of the virtual object may further include:
under the condition that the current animation frame in the two-dimensional animation is projected onto the sheet model, if the position of the three-dimensional model corresponding to the next animation frame of the current animation frame is detected to be beyond the shooting range of the virtual camera, the current animation frame is kept projected onto the sheet model.
The specific details of each part in the above apparatus are already described in the method part embodiments, and thus will not be repeated.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which may be implemented in the form of a program product comprising program code. When the program product runs on an electronic device, the program code causes the electronic device to carry out the steps according to the various exemplary embodiments of the disclosure described in the "exemplary method" section above. In an alternative embodiment, the program product may be implemented as a portable compact disc read-only memory (CD-ROM) comprising program code, and may run on an electronic device such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
Exemplary embodiments of the present disclosure also provide an electronic device. The electronic device may include a processor and a memory. The memory stores executable instructions of the processor, such as program code. The processor performs the method of the present exemplary embodiment by executing the executable instructions.
An electronic device is illustrated in the form of a general purpose computing device with reference to fig. 7. It should be understood that the electronic device 700 shown in fig. 7 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 7, an electronic device 700 may include: processor 710, memory 720, bus 730, I/O (input/output) interface 740, and network adapter 750.
Processor 710 may include one or more processing units, for example: a Central Processing Unit (CPU), an Application Processor (AP), a modem processor, a Display Processing Unit (DPU), a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, an encoder, a decoder, a Digital Signal Processor (DSP), a baseband processor, an artificial intelligence processor, and the like. In one embodiment, the artificial intelligence processor may acquire a two-dimensional animation of a virtual object; generate a sheet model corresponding to the virtual object in the three-dimensional virtual scene based on the two-dimensional animation; and finally project the animation frames in the two-dimensional animation onto the sheet model to obtain the three-dimensional model of the virtual object.
The memory 720 may include volatile memory such as RAM 721, cache unit 722, and nonvolatile memory such as ROM 723. Memory 720 may also include one or more program modules 724, such program modules 724 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. For example, program modules 724 may include modules in apparatus 600 described above.
Bus 730 is used to enable connections between the different components of electronic device 700 and may include a data bus, an address bus, and a control bus.
The electronic device 700 may communicate with one or more external devices 800 (e.g., keyboard, mouse, external controller, etc.) through the I/O interface 740.
The electronic device 700 may communicate with one or more networks through the network adapter 750, e.g., the network adapter 750 may provide a mobile communication solution such as 3G/4G/5G, or a wireless communication solution such as wireless local area network, bluetooth, near field communication, etc. The network adapter 750 may communicate with other modules of the electronic device 700 over the bus 730.
Although not shown in fig. 7, other hardware and/or software modules may also be provided in electronic device 700, including, but not limited to: displays, microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, aspects of the disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit", "module", or "system". Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. A method for generating a three-dimensional model of a virtual object, the method comprising:
acquiring a two-dimensional animation of the virtual object;
generating a sheet model corresponding to the virtual object in a three-dimensional virtual scene based on the two-dimensional animation;
and projecting the animation frame in the two-dimensional animation onto the sheet model to obtain the three-dimensional model of the virtual object.
2. The method of claim 1, wherein the obtaining a two-dimensional animation of the virtual object comprises:
and generating a skeleton animation of the virtual object according to the skeleton information and the action information of the virtual object to serve as the two-dimensional animation.
3. The method of claim 1, wherein generating the sheet model corresponding to the virtual object in the three-dimensional virtual scene based on the two-dimensional animation comprises:
determining size information of the sheet model based on the bounding box of the two-dimensional animation;
and generating the sheet model in the three-dimensional virtual scene according to the size information.
4. A method according to claim 3, wherein said determining size information of said sheet model based on said bounding box of said two-dimensional animation comprises:
acquiring bounding boxes corresponding to each animation frame to be projected according to the contour information of each animation frame to be projected in the two-dimensional animation, and determining the largest bounding box;
and determining the size information of the sheet model based on the size of the maximum bounding box.
5. The method of claim 1, wherein when generating the sheet model corresponding to the virtual object in a three-dimensional virtual scene based on the two-dimensional animation, the method further comprises:
and setting the transparency of the sheet model according to a preset transparency value.
6. The method of claim 1, wherein projecting the animation frame of the two-dimensional animation onto the sheet model comprises:
and rendering the animation frame to a rendering target to acquire texture information of the virtual object, and rendering the sheet model according to the texture information.
7. The method of claim 6, wherein the rendering the animation frame onto a rendering target, obtaining texture information for the virtual object, and rendering the sheet model based on the texture information, comprises:
and under the condition that a plurality of virtual objects have the same animation frame, rendering the same animation frame to the same rendering target, sampling the target position of the same rendering target to acquire texture information of the plurality of virtual objects, and rendering a plurality of sheet models corresponding to the virtual objects according to the texture information of the plurality of virtual objects.
8. The method of claim 1, wherein the three-dimensional virtual scene comprises a virtual light source, and wherein after the projecting the animation frame of the two-dimensional animation onto the sheet model to obtain the three-dimensional model of the virtual object, the method further comprises:
and determining the shadow effect of the three-dimensional model by combining the contour information of the blocking area of the animation frame on the sheet model and the position relation between the sheet model and the virtual light source.
9. The method of claim 1, wherein the three-dimensional virtual scene comprises a virtual camera for capturing the virtual object, the method further comprising, after the projecting the animation frame of the two-dimensional animation onto the sheet model to obtain the three-dimensional model of the virtual object:
and setting the shooting direction of the virtual camera according to the projection direction of the animation frame in the two-dimensional animation to the sheet model, so that the virtual camera always shoots the front surface of the three-dimensional model.
10. The method of claim 9, wherein projecting the animation frame of the two-dimensional animation onto the sheet model to obtain the three-dimensional model of the virtual object, further comprises:
and under the condition that the current animation frame in the two-dimensional animation is projected onto the sheet model, if the position of the three-dimensional model corresponding to the next animation frame of the current animation frame is detected to be beyond the shooting range of the virtual camera, the current animation frame is kept to be projected onto the sheet model.
11. A device for generating a three-dimensional model of a virtual object, the device comprising:
an animation acquisition module configured to acquire a two-dimensional animation of the virtual object;
the sheet model generation module is configured to generate a sheet model corresponding to the virtual object in a three-dimensional virtual scene based on the two-dimensional animation;
and the animation projection module is configured to project animation frames in the two-dimensional animation onto the sheet model to obtain a three-dimensional model of the virtual object.
12. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any one of claims 1 to 10.
13. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1 to 10 via execution of the executable instructions.
CN202311029627.5A 2023-08-15 2023-08-15 Virtual object three-dimensional model generation method and device, medium and electronic equipment Pending CN117058277A (en)


Publications (1)

Publication Number Publication Date
CN117058277A 2023-11-14



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination