CN113473211A - Virtual gift processing method and device, storage medium and electronic equipment


Info

Publication number
CN113473211A
Authority
CN
China
Prior art keywords
model
virtual gift
gift
name
virtual
Prior art date
Legal status
Granted
Application number
CN202110959883.9A
Other languages
Chinese (zh)
Other versions
CN113473211B (en)
Inventor
沈志铭
黄小龙
曹小和
Current Assignee
Guangzhou Boguan Information Technology Co Ltd
Original Assignee
Guangzhou Boguan Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Boguan Information Technology Co Ltd
Priority to CN202110959883.9A
Publication of CN113473211A
Application granted
Publication of CN113473211B
Active legal status
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/44213 Monitoring of end-user related data
    • H04N 21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Abstract

The disclosure belongs to the technical field of live streaming, and relates to a virtual gift processing method and apparatus, a storage medium and an electronic device. The method comprises the following steps: acquiring a virtual gift model, and performing unified division processing on the virtual gift model to obtain model names and the model parts corresponding to the model names; and performing model customization processing on the model parts according to the model names to update the virtual gift model. The disclosure achieves a one-to-many compatible effect in virtual gift model processing, improves the development efficiency and iteration efficiency of the virtual gift model processing scheme, overcomes the defect that the virtual gift model, its special effects and its presentation are too plain and simple, optimizes the customization effect and functional presentation of the virtual gift model, enriches the application scenarios of virtual gift model customization, satisfies the need to effectively and comfortably bind the virtual gift model to related scenarios, improves the richness and interest of the virtual gift model, and to a certain extent optimizes the user's gifting and usage experience.

Description

Virtual gift processing method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of live broadcast technologies, and in particular, to a method and an apparatus for processing a virtual gift, a computer-readable storage medium, and an electronic device.
Background
On various platforms, scenarios in which a user gives a gift are common. In particular, on a live streaming platform, when a gift above a certain amount is given, a large gift special effect can enhance the user's gifting satisfaction. The existing large gift special effect is a frame-animation effect that supports a transparency channel: a fixed, pre-designed animation resource is rendered and played back frame by frame, by computer, to every viewer watching in the live room.
However, this way of generating large gift special effects supports customization only through three means, namely pasting pictures, changing colors and adding text, so the customization effect and functionality are limited; moreover, dedicated program code has to be developed for each gift, one-to-many compatibility cannot be achieved, and labor and development costs are greatly wasted.
In view of the above, there is a need in the art to develop a new method and apparatus for processing virtual gifts.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the present disclosure is to provide a method for processing a virtual gift, a device for processing a virtual gift, a computer-readable storage medium, and an electronic device, thereby overcoming, at least to some extent, the problems of poor customization effect and poor compatibility of the virtual gift due to the limitations of the related art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of embodiments of the present invention, there is provided a method of processing a virtual gift, the method including:
obtaining a virtual gift model, and carrying out unified division processing on the virtual gift model to obtain a model name and a model part corresponding to the model name;
and carrying out model customization processing on the model part according to the model name to update the virtual gift model.
In an exemplary embodiment of the present invention, the model name includes: a map name, a model color, an avatar name and a model text.
In an exemplary embodiment of the present invention, the updating the virtual gift model by the model customization process of the model part according to the model name includes:
if the model name is the map name, obtaining a model map corresponding to the map name;
and determining the model part corresponding to the map name, and performing model customization processing on the model part by using the model map to update the virtual gift model.
In an exemplary embodiment of the present invention, the updating the virtual gift model by the model customization process of the model part according to the model name includes:
if the model name is the model color, acquiring an original color and a target color corresponding to the model color and a pixel value of the virtual gift model;
and calculating a color difference value between the original color and the target color, and updating the virtual gift model by using the pixel values and the color difference value.
In an exemplary embodiment of the present invention, the updating the virtual gift model by the model customization process of the model part according to the model name includes:
if the model name is the avatar name, acquiring an avatar option corresponding to the avatar name;
and determining the model part corresponding to the avatar name, and performing model customization processing on the model part by using the avatar option to update the virtual gift model.
In an exemplary embodiment of the invention, the avatar option includes: anchor avatar, user avatar, and fusion avatar.
In an exemplary embodiment of the present invention, the updating the virtual gift model by the model customization process of the model part according to the model name includes:
if the model name is the model text, acquiring text information corresponding to the model text;
and determining the model part corresponding to the model text, and performing model customization processing on the model part by using the text information to update the virtual gift model.
In an exemplary embodiment of the invention, the method further comprises:
acquiring a panoramic picture, and generating a sky box based on the panoramic picture;
rendering the updated virtual gift model with the sky box.
In an exemplary embodiment of the invention, the method further comprises:
obtaining a model texture pattern of the virtual gift model, and obtaining a sequence frame picture corresponding to the model name;
sequentially configuring the sequence frame pictures to obtain a rendering sequence of the sequence frame pictures;
and rendering the model texture pattern by using the sequence frame pictures based on the rendering sequence to obtain the updated virtual gift model.
In an exemplary embodiment of the invention, the method further comprises:
and saving the updated virtual gift model according to a specified size.
In an exemplary embodiment of the present invention, the saving the updated virtual gift model in a designated size includes:
drawing a model base corresponding to the updated virtual gift model;
and storing the updated virtual gift model according to a specified size based on the model base.
In an exemplary embodiment of the invention, the method further comprises:
acquiring an original gift picture corresponding to the virtual gift model;
and updating the original gift picture according to the updated virtual gift model to obtain a target gift picture.
In an exemplary embodiment of the invention, the method further comprises:
obtaining the amount of the existing resources, and carrying out gift giving processing on the updated virtual gift model to obtain the amount of the gift to be given;
and carrying out presentation agreement judgment on the existing resource amount according to the presentation gift amount to obtain a presentation judgment result, and generating presentation prompt information according to the presentation judgment result.
In an exemplary embodiment of the present invention, the presentation prompt information includes gift special effect information and presentation failure information,
and the generating presentation prompt information according to the presentation determination result includes:
if the presentation determination result is that the existing resource amount is larger than or equal to the gift amount to be given, performing special effect display determination on the gift amount to be given to obtain a special effect determination result, and generating gift special effect information of the updated virtual gift model according to the special effect determination result;
and if the presentation determination result is that the existing resource amount is smaller than the gift amount to be given, generating presentation failure information corresponding to the updated virtual gift model.
In an exemplary embodiment of the present invention, the performing special effect display determination on the gift amount to be given to obtain a special effect determination result, and generating the gift special effect information of the updated virtual gift model according to the special effect determination result includes:
obtaining a special effect resource amount corresponding to the gift special effect information, and performing special effect display determination on the gift amount to be given by using the special effect resource amount to obtain the special effect determination result;
and if the special effect determination result is that the gift amount to be given is larger than the special effect resource amount, generating the gift special effect information of the updated virtual gift model.
In an exemplary embodiment of the invention, the method further comprises:
obtaining a target model name in the model names;
rendering a video stream of the virtual gift model at a model site corresponding to the target model name.
According to a second aspect of embodiments of the present invention, there is provided an apparatus for processing a virtual gift, the apparatus comprising:
the model dividing module is configured to obtain a virtual gift model, and uniformly divide the virtual gift model to obtain a model name and a model part corresponding to the model name;
and the model customizing module is configured to perform model customizing processing on the model part according to the model name to update the virtual gift model.
According to a third aspect of embodiments of the present invention, there is provided an electronic apparatus including: a processor and a memory; wherein the memory has stored thereon computer readable instructions which, when executed by the processor, implement a method of processing a virtual gift as in any of the exemplary embodiments described above.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of processing a virtual gift in any of the above-described exemplary embodiments.
As can be seen from the foregoing technical solutions, the processing method of the virtual gift, the processing apparatus of the virtual gift, the computer storage medium and the electronic device in the exemplary embodiment of the present disclosure have at least the following advantages and positive effects:
in the method and the device provided by the exemplary embodiment of the disclosure, on one hand, the problems of labor cost and development cost waste caused by the fact that corresponding program codes are developed for each virtual gift are solved through the unified division processing of the virtual gift model, the one-to-many compatible effect of the virtual gift model processing is achieved, and the development efficiency and the iteration efficiency of the virtual gift model processing mode are improved; on the other hand, model customization processing is carried out on the model part according to the model name, the defect that the aspects of the virtual gift model, special effects, display and the like are too single and simple is overcome, and the customization effect and function display of the virtual gift model are optimized. Furthermore, the virtual gift model can be applied to live broadcast or other related scenes, application scenes customized by the virtual gift model are enriched, the requirement for effectively and comfortably binding the virtual gift model and the related scenes is met, the richness and interest of the virtual gift model are improved, and the presenting experience and the using experience of a user are optimized to a certain degree.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
Fig. 1 schematically illustrates a flow chart of a method of processing a virtual gift in an exemplary embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a method of a first model customization process in an exemplary embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow chart of a method of a second model customization process in an exemplary embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart of a method of a third model customization process in an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow chart of a method of a fourth model customization process in an exemplary embodiment of the present disclosure;
fig. 6 schematically illustrates a flow chart of a method of rendering a virtual gift model using a sky box in an exemplary embodiment of the present disclosure;
fig. 7 schematically illustrates a flowchart of a method of rendering a virtual gift model using sequential frame pictures in an exemplary embodiment of the present disclosure;
FIG. 8 schematically illustrates a flow chart of a method of saving an updated virtual gift model in an exemplary embodiment of the present disclosure;
fig. 9 schematically illustrates a flow chart of a method of updating a gift picture in an exemplary embodiment of the present disclosure;
FIG. 10 schematically illustrates a flow chart of a method of giveaway protocol determination in an exemplary embodiment of the disclosure;
FIG. 11 is a flow chart that schematically illustrates a method of generating presentation prompt information based on a presentation determination result in an exemplary embodiment of the present disclosure;
FIG. 12 schematically illustrates a flow chart of a method of special effects display determination in an exemplary embodiment of the disclosure;
fig. 13 schematically illustrates a flow chart of a method of embedding a video stream rendering a virtual gift model in an exemplary embodiment of the present disclosure;
FIG. 14 schematically illustrates an interface schematic showing a first virtual luxury vehicle gift in an exemplary embodiment of the present disclosure;
FIG. 15 schematically illustrates an interface schematic showing a second virtual luxury vehicle gift in an exemplary embodiment of the present disclosure;
FIG. 16 is a schematic illustration of an interface of a live room to present a virtual luxury car gift in an exemplary embodiment of the present disclosure;
FIG. 17 is a schematic diagram illustrating an interface within a live room for displaying a virtual luxury car gift to be retrofitted in an exemplary embodiment of the present disclosure;
FIG. 18 schematically illustrates an interface diagram showing a virtual luxury car gift to be retrofitted in an exemplary embodiment of the present disclosure;
FIG. 19 schematically illustrates another interface diagram showing a virtual luxury vehicle gift to be retrofitted according to an exemplary embodiment of the present disclosure;
FIG. 20 schematically illustrates yet another interface diagram showing a virtual luxury car gift to be retrofitted according to an exemplary embodiment of the present disclosure;
FIG. 21 is an interface diagram that schematically illustrates a method of model customization processing, in an exemplary embodiment of the present disclosure;
FIG. 22 is an interface diagram that schematically illustrates another method of model customization processing in an exemplary embodiment of the present disclosure;
FIG. 23 is an interface diagram that schematically illustrates a method of another model customization process for a virtual luxury vehicle gift in an exemplary embodiment of the present disclosure;
FIG. 24 is an interface diagram that schematically illustrates a method of another model customization process for another virtual luxury vehicle gift in an exemplary embodiment of the present disclosure;
FIG. 25 is a schematic illustration of an interface for successful modification of a virtual luxury car gift in an exemplary embodiment of the present disclosure;
fig. 26 is a schematic structural diagram illustrating a processing apparatus of a virtual gift in an exemplary embodiment of the present disclosure;
fig. 27 schematically illustrates an electronic device for implementing a processing method of a virtual gift in an exemplary embodiment of the present disclosure;
fig. 28 schematically illustrates a computer-readable storage medium for implementing a processing method of a virtual gift in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
The terms "a," "an," "the," and "said" are used in this specification to denote the presence of one or more elements/components/parts/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. other than the listed elements/components/etc.; the terms "first" and "second", etc. are used merely as labels, and are not limiting on the number of their objects.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities.
In view of the problems in the related art, the present disclosure provides a method for processing a virtual gift, and fig. 1 shows a flowchart of the method for processing a virtual gift, and as shown in fig. 1, the method for processing a virtual gift at least includes the following steps:
and S110, acquiring the virtual gift model, and uniformly dividing the virtual gift model to obtain a model name and a model part corresponding to the model name.
And S120, carrying out model customization processing on the model part according to the model name to update the virtual gift model.
In the exemplary embodiments of the disclosure, on one hand, the unified division processing of the virtual gift model avoids the labor cost and development cost of developing dedicated program code for each virtual gift, achieves a one-to-many compatible effect in virtual gift model processing, and improves the development efficiency and iteration efficiency of the virtual gift model processing scheme; on the other hand, performing model customization processing on the model parts according to the model names overcomes the defect that the virtual gift model, its special effects and its presentation are too plain and simple, and optimizes the customization effect and functional presentation of the virtual gift model. Furthermore, the virtual gift model can be applied to live streaming or other related scenarios, which enriches the application scenarios of virtual gift model customization, satisfies the need to effectively and comfortably bind the virtual gift model to related scenarios, improves the richness and interest of the virtual gift model, and to a certain extent optimizes the user's gifting and usage experience.
The respective steps of the processing method of the virtual gift will be described in detail below.
In step S110, a virtual gift model is obtained, and the virtual gift model is subjected to a uniform division process to obtain a model name and a model portion corresponding to the model name.
In an exemplary embodiment of the present disclosure, the virtual gift may be a gift that a viewer can give to an anchor while watching a live broadcast; the virtual gift model is then a three-dimensional model or a two-dimensional model of that gift, which is not particularly limited in this exemplary embodiment.
For example, the virtual gift model may be a luxury car gift model, a rocket gift model, or the like. In addition, when the virtual gift model is a luxury car gift model, it may be any luxury car gift model; the subsequent processing applies to all luxury car gift models and is not limited to any specific one.
Furthermore, the meshes of the virtual gift model can be named according to agreed model names, so as to obtain the model part corresponding to each model name.
In an alternative embodiment, the model name includes: a map name, a model color, an avatar name and a model text.
For example, the map name may be the mesh name car_door; the model color may be the mesh name car_body or car_color; the avatar name may be the name of the steering-wheel mesh of the luxury car gift model; and the model text may be the name of the license-plate mesh of the luxury car gift model.
Correspondingly, when the map name is car_door, the corresponding model part is the car door; when the model color is car_body, the corresponding model part is the car body; when the avatar name is the name of the steering-wheel mesh of the luxury car gift model, the corresponding model part is the steering wheel; and when the model text is the name of the license-plate mesh of the luxury car gift model, the corresponding model part is the license plate.
A model part that supports model customization of the virtual gift model, such as the part corresponding to the map name, may be regarded as a modification model; a model part corresponding to the model text, such as the license plate, may be regarded as a special-effect model, and the special-effect model may be used to render the related special effects of the virtual gift model. In addition, the license-plate model part may be regarded not only as a special-effect model but also as a modification model, which is not particularly limited in this exemplary embodiment.
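Conceptually, the unified division is a lookup from agreed mesh names to customizable parts: any mesh whose name follows the naming convention becomes a part that can be customized in a uniform way, regardless of which gift model it belongs to. The following Python sketch illustrates the idea; the mesh names and customization categories are taken from the examples above purely for illustration and are not a definitive implementation of this disclosure.

```python
# Minimal sketch: map agreed mesh names to customizable model parts.
# The mesh names (car_door, car_body, steering_wheel, license_plate, live_tv)
# are the illustrative names used in this description; a real project would
# define its own naming convention.

CUSTOMIZATION_RULES = {
    "car_door":       "map",           # door accepts a user-selected texture map
    "car_body":       "color",         # body accepts a target color
    "car_color":      "color",         # alternative name for the colorable part
    "steering_wheel": "avatar",        # steering wheel accepts an avatar option
    "license_plate":  "text",          # license plate accepts user text
    "live_tv":        "video_stream",  # center console accepts the live stream
}

def divide_model(mesh_names):
    """Return {model name: customization type} for every mesh that follows
    the agreed naming convention; meshes outside the convention are ignored."""
    return {name: CUSTOMIZATION_RULES[name]
            for name in mesh_names
            if name in CUSTOMIZATION_RULES}

if __name__ == "__main__":
    gift_meshes = ["car_door", "car_body", "steering_wheel",
                   "license_plate", "live_tv", "wheel_front_left"]
    print(divide_model(gift_meshes))
    # {'car_door': 'map', 'car_body': 'color', 'steering_wheel': 'avatar',
    #  'license_plate': 'text', 'live_tv': 'video_stream'}
```

Because the lookup depends only on mesh names, any gift model that follows the convention can be processed by the same code, which is the one-to-many compatibility the disclosure aims for.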
In step S120, the model part is subjected to model customizing processing according to the model name to update the virtual gift model.
In an exemplary embodiment of the present disclosure, after the model names and the model parts corresponding to the model names are obtained through the unified partition processing, the corresponding model parts may be subjected to model customization processing according to mesh transformation capabilities given to different model names.
In an alternative embodiment, fig. 2 shows a flow diagram of a method of a first model customization process, which, as shown in fig. 2, comprises at least the following steps: in step S210, if the model name is a map name, a model map corresponding to the map name is obtained.
When the model name is the map name car_door, the mesh map selected by the user, that is, the model map corresponding to the map name, can be obtained.
In step S220, a model part corresponding to the map name is determined, and the model part is subjected to model customization processing using the model map to update the virtual gift model.
After the model name is determined, the model part corresponding to the mesh name car_door can be determined as the car door, and therefore, the car door can be subjected to model customization processing by using the mesh map.
Specifically, a model map specified by the user may be attached to the door of the vehicle to update the luxury vehicle gift model.
In the exemplary embodiment, the model part corresponding to the map name can be subjected to model customization processing through the model map specified by the user, which gives full play to the user's initiative in model customization, enriches the customization diversity of the virtual gift model, and ensures the customization effect of the virtual gift model.
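As a rough illustration of this step, the sketch below attaches a user-selected map to the part whose mesh name matches the map name. The GiftModel and ModelPart structures are hypothetical stand-ins for whatever representation the rendering engine actually provides, assumed here only so the example is self-contained.

```python
from dataclasses import dataclass, field

@dataclass
class ModelPart:
    mesh_name: str
    texture: str = "default.png"   # path of the currently applied map

@dataclass
class GiftModel:
    parts: dict = field(default_factory=dict)  # mesh name -> ModelPart

def customize_map(model: GiftModel, map_name: str, user_map: str) -> GiftModel:
    """Attach the user-selected map to the part named `map_name`."""
    part = model.parts.get(map_name)
    if part is not None:
        part.texture = user_map   # e.g. paste the chosen sticker onto the car door
    return model

car = GiftModel(parts={"car_door": ModelPart("car_door")})
customize_map(car, "car_door", "user_selected_door_sticker.png")
print(car.parts["car_door"].texture)  # user_selected_door_sticker.png
```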
In an alternative embodiment, fig. 3 shows a flow chart of a method of the second model customization process, which, as shown in fig. 3, at least includes the following steps: in step S310, if the model name is the model color, the original color and the target color corresponding to the model color, and the pixel value of the virtual gift model are obtained.
When the model name is the model color car_body or car_color, an original color of the virtual gift model and a target color of the virtual gift model that the user wants to get may be acquired, and a pixel value of each point on the virtual gift model may also be acquired.
When the user wants to set the target color of the virtual gift model, the target color may be determined by sliding an HSV (Hue, Saturation, Value) color selector, or the target color may be determined by the user in another manner, which is not particularly limited in this exemplary embodiment.
In step S320, a color difference value is calculated for the original color and the target color, and the virtual gift model is updated using the pixel value and the color difference value.
After the original color and the target color are obtained, the color difference value can be obtained by subtracting the original color of the virtual gift model to be updated from the target color.
Further, the color difference is added to the pixel value of each point of the virtual gift model to obtain a new pixel value of each point, thereby updating the model part of the virtual gift model, such as the vehicle body.
In the present exemplary embodiment, the color of the virtual gift model may be updated through the original color, the target color and the pixel values of the virtual gift model, so that the virtual gift model presents a gradient color effect. This differs from the existing modification manner in which the whole virtual gift model is repainted with a single uniform color; the gradient coloring of the virtual gift model looks better and richer.
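The gradient recoloring described above reduces to simple per-pixel arithmetic: compute the difference between the target and original colors once, then add that same offset to every pixel of the part, so the original shading is preserved. A minimal NumPy sketch follows; the clipping to the 0-255 range is an added safeguard not spelled out in the text.

```python
import numpy as np

def recolor_part(pixels: np.ndarray, original_color, target_color) -> np.ndarray:
    """Shift every pixel of a model part by (target - original), preserving
    the existing shading so the part shows a gradient instead of a flat color."""
    diff = np.array(target_color, dtype=np.int16) - np.array(original_color, dtype=np.int16)
    shifted = pixels.astype(np.int16) + diff          # same offset for each pixel
    return np.clip(shifted, 0, 255).astype(np.uint8)  # keep values in a valid range

# Example: shift a red car body toward blue; darker pixels stay darker.
body_pixels = np.array([[[200, 30, 30], [150, 20, 20]]], dtype=np.uint8)
print(recolor_part(body_pixels, original_color=(200, 30, 30), target_color=(30, 30, 200)))
# [[[ 30  30 200]
#   [  0  20 190]]]
```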
In an alternative embodiment, fig. 4 shows a flow chart of a third method for model customization, which, as shown in fig. 4, at least includes the following steps: in step S410, if the model name is the avatar name, the avatar option corresponding to the avatar name is obtained.
When the model name is the name of the steering wheel mesh, the head portrait option selected by the user can be obtained.
In an alternative embodiment, the avatar options include: anchor avatar, user avatar, and fusion avatar.
The fusion avatar is obtained by fusing the anchor avatar and the user avatar. Specifically, the anchor avatar and the user avatar may be tilted 45° in opposite directions and arranged so that their displays partially overlap. Furthermore, a frame or background wrapping the overlapped display may be added around the overlapped anchor avatar and user avatar, thereby obtaining the fusion avatar.
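A fusion avatar of this kind can be composed with any ordinary image library. The sketch below uses Pillow and treats the opposite 45° inclinations, the partial overlap and the surrounding frame as the only requirements; the sizes, offsets and frame color are arbitrary illustrative choices, not values from this disclosure.

```python
from PIL import Image, ImageOps

def make_fusion_avatar(anchor_path: str, user_path: str, size: int = 128) -> Image.Image:
    """Tilt the anchor avatar and the user avatar 45 degrees in opposite
    directions, overlap them partially, and wrap the result in a simple frame."""
    anchor = Image.open(anchor_path).convert("RGBA").resize((size, size))
    user = Image.open(user_path).convert("RGBA").resize((size, size))

    anchor = anchor.rotate(45, expand=True)   # lean one way
    user = user.rotate(-45, expand=True)      # lean the other way

    canvas = Image.new("RGBA", (int(anchor.width * 1.6), anchor.height), (0, 0, 0, 0))
    canvas.alpha_composite(anchor, (0, 0))
    # Place the user avatar so that part of it overlaps the anchor avatar.
    canvas.alpha_composite(user, (int(anchor.width * 0.6), 0))

    # Decorative frame wrapping the overlapped display.
    return ImageOps.expand(canvas, border=8, fill=(255, 215, 0, 255))

# fused = make_fusion_avatar("anchor.png", "user.png")
# fused.save("fusion_avatar.png")
```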
In step S420, a model part corresponding to the avatar name is determined, and the model part is subjected to model customization processing using the avatar option to update the virtual gift model.
When the model name is the name of the steering wheel mesh, the corresponding model part can be determined to be the steering wheel. Therefore, the model customizing process can be performed on the steering wheel by using the avatar option.
Specifically, the content of the avatar option selected by the user may be pasted onto the steering wheel to obtain an updated virtual gift model.
In the exemplary embodiment, model customization processing is performed on the model part through the head portrait option, so that the preference of a user is met, the customization diversity of the virtual gift model is enriched, and the use and gift giving experience of the user are optimized.
In an alternative embodiment, fig. 5 shows a flow chart of a fourth method for model customization, which, as shown in fig. 5, at least includes the following steps: In step S510, if the model name is the model text, text information corresponding to the model text is obtained.
When the model name is the name of the license-plate mesh of the luxury car gift model, the text information input by the user can be acquired.
The text information may be a license plate number set by the user, or may be other text, which is not particularly limited in this exemplary embodiment.
In step S520, the model part corresponding to the model text is determined, and the model part is subjected to model customization processing using the text information to update the virtual gift model.
After the model name is determined, the model part corresponding to the license-plate mesh name can be determined as the license plate. Therefore, the license plate can be subjected to model customization processing by using the text information input by the user.
Specifically, the text information input by the user may be filled in on the license plate to update the virtual gift model.
In the exemplary embodiment, model customization is performed on the corresponding model part through the model text, which enriches the customization diversity of the virtual gift model, satisfies the user's customization requirement for the license plate, and optimizes the user experience.
Further, a sky box can be used to render global illumination for the updated virtual gift model.
In an alternative embodiment, fig. 6 shows a flow diagram of a method for rendering a virtual gift model using a sky box, as shown in fig. 6, the method comprising at least the steps of: in step S610, a panoramic picture is acquired, and a sky box is generated based on the panoramic picture.
The panoramic picture, also called a three-dimensional real scene, is a viewpoint image synthesized by fitting and processing photos of a real scene, so that the user feels immersed in the environment. Building the sky box from a panoramic picture can improve the richness of the global illumination and the user's sense of immersion.
The panoramic picture may be generated by the designer according to the determined lighting effect, or may be generated in other manners, which is not particularly limited in this exemplary embodiment.
Further, after obtaining the panoramic picture, the panoramic picture may be used to generate a sky box.
When the Unity rendering engine is used to generate the sky box corresponding to the panoramic picture, the HDR picture of the panorama may be imported, the Texture Type set to Cubemap, and Apply clicked.
HDR is an abbreviation of High Dynamic Range, which was originally a CG (Computer Graphics) concept.
When representing an image, a computer distinguishes its brightness with 8-bit (256) or 16-bit (65536) levels, but these hundreds or tens of thousands of levels cannot reproduce true natural lighting. An HDR file is a special graphics file format in which each pixel stores, in addition to its RGB information, the actual luminance of that point.
Further, a Material is created, and its shader type is set to the Skybox/Cubemap type. Alternatively, a custom shader using cubemap reflection can be written in Unity to achieve a custom effect.
Finally, the Lighting panel is opened to replace the sky box material; the path may be Window-Lighting-Environment-Skybox or another path, which is not limited in this exemplary embodiment.
In addition, the panorama picture may also be generated into a sky box by another 3D (3-dimensional) rendering engine, which is not particularly limited in the present exemplary embodiment.
In step S620, the updated virtual gift model is rendered using the sky box.
After generating the sky box for the panoramic picture, the updated virtual gift model may be rendered using the sky box.
Specifically, the HDR ambient-light resource is downloaded according to the address, i.e. the path, of the HDR ambient light of the HDR sky box, and the ambient light is rendered, so that the sky box serves as the environment background, performs environment reflection, and provides rich global illumination for the whole scene.
In the exemplary embodiment, the sky box for rendering the virtual gift model can be obtained through the panoramic picture, the generation mode is simple and convenient, and the HDR ambient light is added to the virtual gift model, so that the customization and the display effect of the virtual gift model are better.
In addition, the sequential frame picture rendering can be carried out on the special effect model of the virtual gift model, so that the particle effect of the virtual gift model is better.
In an alternative embodiment, fig. 7 shows a flowchart of a method for rendering a virtual gift model using sequential frame pictures, as shown in fig. 7, the method at least includes the following steps: in step S710, a model texture pattern of the virtual gift model is acquired, and a sequence frame picture corresponding to the model name is acquired.
The model texture pattern may be a model surface specified by the designer; the model surface is a model part, such as the car body, onto which a special effect is to be rendered using the sequence frame pictures.
The sequence frame picture may be a special effect picture of a portion to be rendered for the model, such as a sequence frame picture of wind.
In step S720, a rendering order of the sequence frame pictures is obtained by performing order configuration processing on the sequence frame pictures.
After the sequence frame pictures are obtained, since the arrangement of the sequence frame pictures is out of order, the sequence frame pictures can be subjected to sequential configuration processing.
Specifically, the sequence frame pictures are arranged according to a desired rendering order, so as to obtain the rendering order of the sequence frame pictures. In the process of arranging the sequence frame pictures, the sequence frame pictures of one frame may be shifted to the right or the left, or the sequence frame pictures may be arranged in other manners, which is not particularly limited in this exemplary embodiment.
In step S730, based on the rendering order, the model texture pattern is rendered using the sequence frame pictures to obtain an updated virtual gift model.
After the rendering order is obtained, the sequence frame pictures may be added to the model texture pattern according to the rendering order, so as to implement rendering of the model texture pattern to obtain an updated virtual gift model.
In the exemplary embodiment, the sequence frame pictures are sequentially configured to obtain the corresponding rendering sequence, so that the particle dynamic effects brought by the sequence frame pictures are not influenced with each other, and the rendering effect of the particle dynamic effects is natural and vivid, so as to ensure the rendering effect of the virtual gift model.
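Put simply, the order configuration amounts to sorting the shuffled sequence frames and then compositing each one onto the model texture in turn, producing one rendered frame per sequence picture. The Python sketch below is a hedged illustration: the frame-index naming scheme and the alpha-compositing blend are assumptions chosen for the example, not requirements of this disclosure.

```python
import re
from PIL import Image

def configure_order(frame_paths):
    """Sort shuffled sequence-frame files by the frame index embedded in the
    file name, e.g. wind_007.png -> 7 (the naming scheme is an assumption)."""
    def frame_index(path):
        match = re.search(r"(\d+)", path)
        return int(match.group(1)) if match else 0
    return sorted(frame_paths, key=frame_index)

def render_sequence(texture_path, frame_paths):
    """Yield the model texture with each sequence frame composited on top,
    one rendered frame per sequence picture, in the configured order."""
    texture = Image.open(texture_path).convert("RGBA")
    for path in configure_order(frame_paths):
        frame = Image.open(path).convert("RGBA").resize(texture.size)
        yield Image.alpha_composite(texture, frame)

# frames = ["wind_003.png", "wind_001.png", "wind_002.png"]
# for rendered in render_sequence("car_body_texture.png", frames):
#     ...  # hand each composited frame to the renderer in order
```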
After the customized update of the virtual gift model, the updated virtual gift model may also be saved.
In an alternative embodiment, the updated virtual gift model is saved at a specified size.
Since the original gift picture of the virtual gift model is displayed as a static picture of a specified size once the user has obtained the right to give the virtual gift model, the virtual gift model needs to be saved at the specified size after it is customized and updated. For example, the specified size may be 200 × 200, or may be another size, which is not particularly limited in the present exemplary embodiment.
In an alternative embodiment, fig. 8 shows a flow diagram of a method of saving an updated virtual gift model, which, as shown in fig. 8, comprises at least the following steps: in step S810, a model base corresponding to the updated virtual gift model is drawn.
Since the virtual gift model is stored with only one real-time static map, a dynamic model base, such as a circular base, can be drawn below the virtual gift model.
In step S820, the updated virtual gift model is saved in a designated size based on the model base.
After the model base is drawn, the updated virtual gift model may be saved at the specified size.
In the exemplary embodiment, the display effect of the virtual gift model and the appreciation of the user can be increased by drawing the model base, and the updated virtual gift model is stored according to the specified size, so that the abnormal display condition can be avoided, and the display effect of the real-time updated picture of the virtual gift model can be ensured.
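For the saving step, a rough two-dimensional stand-in is sketched below: it draws a simple elliptical base under a snapshot of the updated model and saves the result at the specified size. In the disclosure the base is a dynamic element drawn below the 3D model, so this Pillow version is only an assumption-laden illustration of the idea, with the base shape, color and 200 × 200 size chosen from the example above.

```python
from PIL import Image, ImageDraw

def save_gift_thumbnail(snapshot_path: str, out_path: str, size: int = 200):
    """Draw a simple base under the model snapshot and save the result
    at the specified size (200 x 200 in the example above)."""
    snapshot = Image.open(snapshot_path).convert("RGBA")
    w, h = snapshot.size

    canvas = Image.new("RGBA", snapshot.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(canvas)
    # Elliptical "base" across the lower part of the canvas.
    draw.ellipse([w * 0.1, h * 0.75, w * 0.9, h * 0.95], fill=(80, 80, 80, 180))

    canvas.alpha_composite(snapshot)             # model snapshot on top of the base
    canvas.resize((size, size)).save(out_path)   # save at the specified size

# save_gift_thumbnail("updated_car_snapshot.png", "gift_icon_200x200.png")
```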
Since the virtual gift model is used for the audience to give away to the main broadcast, the gift picture corresponding to the virtual gift model can be updated in the giving process.
In an alternative embodiment, fig. 9 shows a flowchart of a method for updating a gift picture, as shown in fig. 9, the method at least includes the following steps: in step S910, an original gift picture corresponding to the virtual gift model is acquired.
The original gift pictures may include a public screen gift picture and a gift banner picture, and may also include other gift pictures related to the audience gift process, which is not particularly limited in this exemplary embodiment.
In step S920, the original gift picture is updated according to the updated virtual gift model to obtain the target gift picture.
Since the display content of the original gift picture includes the virtual gift model, after the virtual gift model is updated, the updated virtual gift model can be replaced and displayed on the original gift picture to obtain the target gift picture.
In the exemplary embodiment, the original gift picture can be updated through the updated virtual gift model, so that the picture related to the virtual gift model in the live broadcast process is consistent with the update of the virtual gift model, the customized update result of the virtual gift model is reflected in real time, and the gift sending satisfaction of the user is improved.
After the user customizes the virtual gift model, the customized virtual gift model can be presented for the anchor by clicking the "present" control. Further, presentation prompt information corresponding to the presentation determination result may be generated.
In an alternative embodiment, fig. 10 shows a flow diagram of a method of giveaway protocol determination, which, as shown in fig. 10, comprises at least the following steps: in step S1010, the existing resource amount is acquired, and gift-giving processing is performed on the updated virtual gift model to obtain a gift-giving amount.
The existing resource amount may be an account amount currently owned by the user, and the gifted gift amount may be an amount spent gifting the updated virtual gift model.
In step S1020, the gift payment agreement determination is made on the existing resource amount based on the gift payment amount to obtain a gift determination result, and the gift presentation prompt information is generated based on the gift determination result.
After the existing resource amount and the present gift amount are obtained, the present protocol determination can be realized by comparing the present gift amount with the existing resource amount to obtain a present determination result, so that corresponding present prompt information is generated according to the present determination result.
In an alternative embodiment, the gifting prompt message includes gift special effect information and gifting failure information, fig. 11 is a flowchart illustrating a method for generating the gifting prompt message according to the gifting determination result, as shown in fig. 11, the method at least includes the following steps: in step S1110, if the present determination result is that the existing resource amount is greater than or equal to the present gift amount, a special effect display determination is performed on the present gift amount to obtain a special effect determination result, and gift special effect information of the updated virtual gift model is generated according to the special effect determination result.
When the existing resource amount and the present gift amount are subjected to presentation agreement determination, that is, the existing resource amount and the present gift amount are compared, the present determination result can be obtained that the existing resource amount is greater than or equal to the present gift amount, that is, the amount in the account of the user is greater than or equal to the amount of the present virtual gift model, and the cost of presenting the virtual gift model can be borne.
However, whether or not the gift special effect information corresponding to the virtual gift model can be displayed requires further special effect display determination of the gift amount to be presented.
In an alternative embodiment, fig. 12 is a flowchart illustrating a method for determining special effects display, and as shown in fig. 12, the method at least includes the following steps: in step S1210, the special effect resource amount corresponding to the gift special effect information is acquired, and the special effect resource amount is used to perform special effect display determination on the donation resource amount to obtain a special effect determination result.
The amount of the special effect resource is an amount judgment standard for displaying the gift special effect information, and the gift special effect information of the virtual gift model is generated only when the amount of the donation resource of the virtual gift model given by the user is larger than the amount of the special effect resource.
Therefore, the special effect resource amount can be used for carrying out special effect display judgment on the donation resource amount of the virtual gift model, namely the special effect resource amount is compared with the donation resource amount to obtain a corresponding special effect judgment result.
In step S1220, if the result of the special effect determination is that the donation resource amount is greater than the special effect resource amount, gift special effect information of the updated virtual gift model is generated.
When the special effect resource amount and the donation resource amount are subjected to special effect display judgment, namely the special effect resource amount and the donation resource amount are compared, the special effect judgment result that the donation resource amount is larger than the special effect resource amount can be obtained, and the donation resource amount spent by the virtual gift model to be donated meets the requirement of displaying gift special effect information, so that gift special effect information of the updated virtual gift model can be generated. The gift special effect information may be information for playing and displaying a gift big special effect.
In the exemplary embodiment, the gift special effect information of the virtual gift model can be generated through special effect display judgment, and the gift special effect information is displayed in a meticulous playing mode, so that the interface display effect when the user presents the virtual gift model is ensured, and the gift-sending satisfaction degree when the user presents the virtual gift model is also met.
After the gift big special effect of the virtual gift model is played according to the gift special effect information, the playing component corresponding to the gift special effect information can be hidden.
In addition, the special effect determination result may be that the donation resource amount is less than or equal to the special effect resource amount, which indicates that the donation amount of the virtual gift model donated by the user cannot meet the standard for generating the gift special effect information, and therefore, the gift special effect information with the virtual gift model cannot be generated. Furthermore, when the user presents the virtual gift model, the corresponding gift big special effect cannot be played and displayed.
In step S1120, if the present determination result is that the existing resource amount is less than the present gift amount, the present failure information corresponding to the updated virtual gift model is generated.
When the existing resource amount and the present gift amount are subjected to presentation agreement determination, that is, the existing resource amount and the present gift amount are compared, the present determination result can be obtained that the existing resource amount is smaller than the present gift amount, that is, the amount in the user account is smaller than the amount of the present virtual gift model, and the cost for presenting the virtual gift model cannot be borne.
Therefore, the user cannot give the virtual gift model, and corresponding giving failure information is generated for prompting the user.
In the exemplary embodiment, the gift special effect information or the giving failure information corresponding to the virtual gift model can be generated through the gifting determination; every possible determination result is handled in both logic and presentation, which guarantees the display effect of the gifting prompt information under different conditions.
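The two comparisons above (the account balance against the price of the gift, and then the price of the gift against the special-effect threshold) reduce to straightforward conditionals. A simplified sketch follows; the function name, the dictionary shape and the message strings are illustrative only.

```python
def gifting_prompt(balance: int, gift_price: int, effect_threshold: int) -> dict:
    """Return the prompt information for one gifting attempt.

    balance          -- the user's existing resource amount
    gift_price       -- the amount required to give the updated gift model
    effect_threshold -- the minimum amount at which the big special effect plays
    """
    if balance < gift_price:
        return {"ok": False, "message": "Gifting failed: insufficient balance."}

    prompt = {"ok": True, "message": "Gift sent."}
    if gift_price > effect_threshold:
        prompt["special_effect"] = "play_big_effect"  # show the large gift effect
    return prompt

print(gifting_prompt(balance=2_000_000, gift_price=1_000_000, effect_threshold=500_000))
# {'ok': True, 'message': 'Gift sent.', 'special_effect': 'play_big_effect'}
print(gifting_prompt(balance=100, gift_price=1_000_000, effect_threshold=500_000))
# {'ok': False, 'message': 'Gifting failed: insufficient balance.'}
```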
Furthermore, during or after the model customization process is performed by the user to obtain the updated virtual gift model, the video stream corresponding to the virtual gift model can be embedded and displayed.
In an alternative embodiment, fig. 13 shows a flow diagram of a method of embedding a video stream rendering a virtual gift model, as shown in fig. 13, the method comprising at least the steps of: in step S1310, a target model name among the model names is acquired.
In addition to determining the mesh named live_tv as the target model name among the model names, the target model name corresponding to the center console of the virtual gift model may also be obtained according to a different naming convention, which is not particularly limited in this exemplary embodiment.
In step S1320, the video stream corresponding to the virtual gift model is rendered at the model part corresponding to the target model name.
When the target model name is live_tv, the corresponding model part can be determined to be the center console of the virtual gift model, such as a luxury car.
Furthermore, real-time synchronous live broadcast stream frame texture data is rendered into the model part frame by frame, and the embedded video stream of the virtual gift model is obtained.
In the exemplary embodiment, the current live stream can be embedded into the virtual gift model by rendering the video stream at the model part corresponding to the target model name, achieving an embedded live effect and making the updated virtual gift model more vivid and lively.
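Rendering the live stream into the part named live_tv is, at its core, a per-frame texture update. The sketch below shows only the shape of that loop; pull_live_frame and set_part_texture are hypothetical placeholders for whatever streaming and rendering APIs are actually used, and the fixed sleep is a simplification of real frame synchronisation.

```python
import time

def embed_live_stream(model, target_part, pull_live_frame, set_part_texture, fps=30):
    """Continuously copy decoded live-stream frames onto the texture of the
    model part named `target_part` (e.g. "live_tv", the center console)."""
    interval = 1.0 / fps
    while True:
        frame = pull_live_frame()   # hypothetical: latest decoded frame as raw texture data
        if frame is None:           # stream ended
            break
        set_part_texture(model, target_part, frame)  # hypothetical engine call
        time.sleep(interval)        # stay roughly in step with the stream frame rate
```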
The following describes the processing method of the virtual gift in the embodiment of the present disclosure in detail with reference to an application scenario.
Fig. 14 shows a schematic interface diagram of a first virtual luxury car gift; as shown in fig. 14, it is the highest-grade noble car model, produced only in limited quantities during the event.
Fig. 15 shows a schematic interface diagram of a second virtual luxury car gift; as shown in fig. 15, it is also a noble car model produced only in limited quantities during the event.
When the virtual gift model is a luxury car gift model, it can be any luxury car gift model; the customization processing of the virtual gift model applies to all luxury car gift models and is not limited to a specific one.
Fig. 16 is a schematic view illustrating an interface of the live broadcast for displaying the virtual luxury car gift, as shown in fig. 16, after the user has the authority of the virtual luxury car gift, the user can open the original gift picture of the virtual luxury car gift for displaying in the live broadcast. The original gift picture of the virtual gift model is presented as a static picture of a specified size, which may be 200 x 200, for example.
Fig. 17 shows a schematic view of an interface for displaying the virtual luxury car gift to be modified in the live room; as shown in fig. 17, the user can enter the modification interface of the virtual luxury car gift by clicking the "Go to modify" control under the virtual luxury car gift.
In addition, after the user has modified the virtual luxury car gift, the user can give it by clicking the "Send to anchor" control below the virtual luxury car gift. The "Send to anchor" control may also be marked with the gift amount to be given, such as 1,000,000 tickets. The gift amount to be given may be the amount spent on giving the updated virtual gift model.
Fig. 18 shows a schematic view of an interface for displaying a virtual luxury car gift to be modified; as shown in fig. 18, the user can enter the modification interface of the virtual luxury car gift by clicking the "Go to modify" control under the virtual luxury car gift.
In addition, after the user has modified the virtual luxury car gift, the user can give it by clicking the "Send to anchor" control below the virtual luxury car gift. The "Send to anchor" control may also be marked with the gift amount to be given, such as 1,000,000 tickets. The gift amount to be given may be the amount spent on giving the updated virtual gift model.
Fig. 19 shows another interface schematic diagram showing a virtual luxury car gift to be reformed, and as shown in fig. 19, a user may enter the reforming interface of another virtual luxury car gift by clicking on the "go to reform" control under the other virtual luxury car gift.
In addition, after the user modifies the virtual luxury car gift, the user can give the virtual luxury car gift by clicking a 'main broadcasting sending' control below the virtual luxury car gift. In addition, the "anchor" control may also have a gift-offering amount, such as 1000000 tickets, marked thereon. The gifting gift amount may be an amount spent for gifting the updated virtual gift model.
Fig. 20 shows a schematic view of yet another interface for displaying a virtual luxury car gift to be modified, and as shown in fig. 20, a user may enter into a modification interface for yet another virtual luxury car gift by clicking on a go to modify control under yet another virtual luxury car gift.
In addition, after the user modifies the virtual luxury car gift, the user can give the virtual luxury car gift by clicking a 'main broadcasting sending' control below the virtual luxury car gift. In addition, the "anchor" control may also have a gift-offering amount, such as 1000000 tickets, marked thereon. The gifting gift amount may be an amount spent for gifting the updated virtual gift model.
Fig. 21 shows an interface diagram of a method of model customization processing. As shown in Fig. 21, when the user clicks the signature option, the model name can be determined to be the name information of the model file, and the model part corresponding to the model file can then be determined.
When the model name is the name of the license plate mesh of the luxury vehicle gift model, the file information input by the user can be acquired.
The file information may be a license plate number set by the user, or other text information, which is not particularly limited in this exemplary embodiment.
After the model name is determined, the model part corresponding to the name of the model file of the license plate mesh can be determined as the license plate. Therefore, the license plate can be subjected to model customization processing by using the file information input by the user.
Specifically, the virtual gift model can be updated by filling the file information input by the user, such as 888888, into the license plate.
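A minimal sketch of this text fill, assuming the license-plate mesh is mapped to a rectangular region of the model's texture; the region coordinates, font file and texture file name are illustrative assumptions, while the Pillow calls are standard.

```python
from PIL import Image, ImageDraw, ImageFont

def fill_license_plate(texture_path, text="888888", plate_box=(120, 400, 360, 470)):
    """Write the user-entered file information into the license-plate region of the texture."""
    texture = Image.open(texture_path).convert("RGBA")
    draw = ImageDraw.Draw(texture)
    font = ImageFont.truetype("plate_font.ttf", size=48)   # assumed font file
    x0, y0, x1, y1 = plate_box
    draw.rectangle(plate_box, fill="white")                 # clear the plate area first
    draw.text(((x0 + x1) // 2, (y0 + y1) // 2), text,
              font=font, fill="black", anchor="mm")         # center the plate number
    texture.save(texture_path)
```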
Fig. 22 shows a schematic interface diagram of another method of model customization processing. As shown in Fig. 22, when the user clicks the paint option, the model name can be determined to be the name information of the model color, and the model part corresponding to the model color can then be determined.
And if the model name is the model color, acquiring the original color and the target color corresponding to the model color and the pixel value of the virtual gift model.
When the model name is the model color car_body or car_color, the original color of the virtual gift model and the target color that the user wants can be acquired, and the pixel value of each point on the virtual gift model can also be acquired.
When the user wants to set the target color of the virtual gift model, the user may determine the target color by sliding the HSV color model, or the user may determine the target color by other manners, which is not particularly limited in the exemplary embodiment.
And calculating the color difference value of the original color and the target color to obtain the color difference value, and updating the virtual gift model by using the pixel value and the color difference value.
After the original color and the target color are obtained, the color difference can be calculated by subtracting the original color of the virtual gift model to be updated from the target color.
Further, the color difference is added to the pixel value of each point of the virtual gift model to obtain a new pixel value of each point, thereby updating the model part of the virtual gift model, such as the vehicle body.
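The recoloring step can be sketched as follows: the difference between the target color and the original color is added to every pixel of the body texture. The texture file name and example colors are assumptions made only for illustration.

```python
import numpy as np
from PIL import Image

def recolor_body(texture_path, original_color, target_color):
    """Shift every pixel of the body texture by (target color - original color)."""
    pixels = np.asarray(Image.open(texture_path).convert("RGB"), dtype=np.int16)
    diff = np.array(target_color, dtype=np.int16) - np.array(original_color, dtype=np.int16)
    updated = np.clip(pixels + diff, 0, 255).astype(np.uint8)   # add the color difference per pixel
    Image.fromarray(updated).save(texture_path)

# Example: shift a red body (200, 30, 30) toward the target blue (30, 30, 200)
# recolor_body("car_body.png", (200, 30, 30), (30, 30, 200))
```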
Fig. 23 shows a schematic interface diagram of another method of model customization processing of the virtual luxury vehicle gift. As shown in Fig. 23, when the user clicks the sticker option, the model name can be determined to be a map name, and the model part corresponding to the map name can then be determined.
And if the model name is the map name, acquiring the model map corresponding to the map name.
When the model name is the map name car_door, the mesh map selected by the user, that is, the model map corresponding to the map name, can be obtained; for example, it may be a lightning map.
And determining a model part corresponding to the map name, and performing model customization processing on the model part by using the model map to update the virtual gift model.
After the model name is determined, the model part corresponding to the model name car_door can be determined to be the vehicle door, and therefore the vehicle door can be subjected to model customization processing using the lightning mesh map.
Specifically, a user-specified lightning mesh map may be attached to the vehicle door to update the luxury vehicle gift model.
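The decal step can be sketched as below, assuming the car_door mesh is mapped to a known rectangular region of the texture atlas; the coordinates and file names are assumptions for illustration only.

```python
from PIL import Image

def apply_door_decal(texture_path, decal_path, door_box=(0, 256, 256, 512)):
    """Composite a user-chosen mesh map (e.g. lightning) onto the door region of the texture."""
    texture = Image.open(texture_path).convert("RGBA")
    decal = Image.open(decal_path).convert("RGBA")
    x0, y0, x1, y1 = door_box
    decal = decal.resize((x1 - x0, y1 - y0))         # fit the decal to the door area
    texture.alpha_composite(decal, dest=(x0, y0))    # respect the decal's transparency
    texture.save(texture_path)
```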
Fig. 24 shows an interface diagram of another method of model customization processing of another virtual luxury car gift. As shown in Fig. 24, when the user clicks the sticker option, the model name can be determined to be a map name, and the model part corresponding to the map name can then be determined.
And if the model name is the map name, acquiring the model map corresponding to the map name.
When the model name is the map name car_door, the mesh map selected by the user, that is, the model map corresponding to the map name, can be obtained; for example, it may be a flame map.
And determining a model part corresponding to the map name, and performing model customization processing on the model part by using the model map to update the virtual gift model.
After the model name is determined, the model part corresponding to the model name car_door can be determined to be the vehicle door, and therefore the vehicle door can be subjected to model customization processing using the flame mesh map.
Specifically, a mesh map of the flame specified by the user can be attached to the vehicle door to update the luxury vehicle gift model.
Fig. 25 is a schematic diagram illustrating an interface of successful modification of the virtual gift, and as shown in fig. 25, after the customized update of the virtual gift model, the updated virtual gift model may be saved.
Since the original gift picture of the virtual gift model is displayed as a static picture with a specified size when the user obtains the right to give the virtual gift model, the updated virtual gift model needs to be stored at that specified size after customization. For example, the specified size may be 200 × 200 or another size, which is not particularly limited in this exemplary embodiment.
Since only a single static snapshot of the virtual gift model is stored, a model base, such as a circular base, can be drawn below the virtual gift model.
After the model base is drawn, the updated virtual gift model may be saved at the specified size.
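The save step can be sketched as follows, assuming the engine has already produced a snapshot image of the updated model; the base shape, colors and the 200 × 200 output size follow the example above and are otherwise assumptions.

```python
from PIL import Image, ImageDraw

def save_updated_gift(snapshot, out_path, size=(200, 200)):
    """Place the model snapshot on a drawn circular base and store it at the specified size."""
    canvas = Image.new("RGBA", snapshot.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(canvas)
    w, h = snapshot.size
    draw.ellipse((w * 0.1, h * 0.8, w * 0.9, h * 0.98),
                 fill=(80, 80, 80, 255))            # circular base under the model
    canvas.alpha_composite(snapshot.convert("RGBA"))
    canvas.resize(size).save(out_path)              # save at the specified size
```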
In addition, a sky box can be used to render global illumination for the updated virtual gift model.
First, a panoramic picture is acquired, and a sky box is generated based on the panoramic picture.
The panoramic picture may be generated by the designer according to the determined lighting effect, or may be generated in other manners, which is not particularly limited in this exemplary embodiment.
Then, after the panoramic picture is obtained, a sky box may be generated using the panoramic picture.
When the Unity rendering engine is used to generate the sky box corresponding to the panoramic picture, the HDR picture of the panoramic picture can be imported, the Texture Type set to Cube, and Apply clicked.
A Material is then created and its shader set to the skybox cube-map type. The shader can also be written by hand using cube-map reflection in Unity to achieve a custom effect.
Finally, the Lighting panel is opened to replace the sky box material; the path may be Window - Lighting - Environment - Skybox or another path, which is not limited in this exemplary embodiment.
After the sky box is generated from the panoramic picture, the updated virtual gift model can be rendered using the sky box. Specifically, the HDR ambient-light resource is downloaded according to the address, that is, the path, of the HDR ambient light of the sky box, and the ambient light is rendered, so that the sky box serves as the environment background for ambient reflection and provides rich global illumination for the entire scene.
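One face of such a sky box can be derived from the panoramic picture roughly as follows: every pixel of the +X cube face is mapped to a view direction, which is then looked up in the equirectangular panorama; a full sky box repeats this for all six faces. The file names, face size and axis convention here are assumptions for illustration.

```python
import numpy as np
from PIL import Image

def panorama_to_pos_x_face(panorama_path, face_size=512):
    """Sample the +X sky-box face out of an equirectangular panoramic picture."""
    pano = np.asarray(Image.open(panorama_path).convert("RGB"))
    h, w, _ = pano.shape
    t = (np.arange(face_size) + 0.5) / face_size * 2.0 - 1.0
    a, b = np.meshgrid(t, t)                     # horizontal / vertical face coordinates
    dx, dy, dz = np.ones_like(a), -b, -a         # view direction of each face pixel (+X face)
    norm = np.sqrt(dx * dx + dy * dy + dz * dz)
    lon = np.arctan2(dz, dx)                     # longitude of the direction
    lat = np.arcsin(dy / norm)                   # latitude of the direction
    u = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    v = ((0.5 - lat / np.pi) * (h - 1)).astype(int)
    return Image.fromarray(pano[v, u])

# panorama_to_pos_x_face("studio_panorama.png").save("skybox_pos_x.png")
```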
In the virtual gift processing method of this application scenario, on the one hand, the unified division processing of virtual gift models avoids the labor and development cost wasted on writing separate program code for every virtual gift, achieves a one-to-many compatible way of processing virtual gift models, and improves the development and iteration efficiency of the virtual gift model processing scheme; on the other hand, performing model customization processing on the model part according to the model name overcomes the drawback that the virtual gift model is too uniform and simple in its model, special effects, and display, and optimizes the customization effect and functional display of the virtual gift model. Furthermore, the virtual gift model can be applied to live broadcast or other related scenes, enriching the application scenes of virtual gift customization, meeting the need to bind the virtual gift model to related scenes effectively and comfortably, increasing the richness and interest of the virtual gift model, and to a certain extent optimizing the user's gifting and usage experience.
Further, in an exemplary embodiment of the present disclosure, a processing apparatus of a virtual gift is also provided. Fig. 26 is a schematic diagram illustrating a configuration of a virtual gift processing apparatus, and as shown in fig. 26, the virtual gift processing apparatus 2600 may include: a model partitioning module 2610 and a model customization module 2620. Wherein:
the model partitioning module 2610 is configured to obtain a virtual gift model and perform unified division processing on the virtual gift model to obtain a model name and a model part corresponding to the model name; the model customizing module 2620 is configured to update the virtual gift model by performing model customization processing on the model part according to the model name.
In an exemplary embodiment of the present invention, the model name includes: the map name, the model color, the avatar name, and the model file.
In an exemplary embodiment of the present invention, the updating the virtual gift model by the model customization process of the model part according to the model name includes:
if the model name is the map name, obtaining a model map corresponding to the map name;
and determining the model part corresponding to the map name, and performing model customization processing on the model part by using the model map to update the virtual gift model.
In an exemplary embodiment of the present invention, the updating the virtual gift model by the model customization process of the model part according to the model name includes:
if the model name is the model color, acquiring an original color and a target color corresponding to the model color and a pixel value of the virtual gift model;
and calculating the color difference value of the original color and the target color to obtain the color difference value, and updating the virtual gift model by using the pixel value and the color difference value.
In an exemplary embodiment of the present invention, the updating the virtual gift model by the model customization process of the model part according to the model name includes:
if the model name is the avatar name, acquiring an avatar option corresponding to the avatar name;
and determining the model part corresponding to the avatar name, and performing model customization processing on the model part by using the avatar option to update the virtual gift model.
In an exemplary embodiment of the invention, the avatar option includes: anchor avatar, user avatar, and fusion avatar.
In an exemplary embodiment of the present invention, the updating the virtual gift model by the model customization process of the model part according to the model name includes:
if the model name is the model file, acquiring file information corresponding to the model file;
and determining the model part corresponding to the model file, and performing model customization processing on the model part by using the file information to update the virtual gift model.
In an exemplary embodiment of the invention, the method further comprises:
acquiring a panoramic picture, and generating a sky box based on the panoramic picture;
rendering the updated virtual gift model with the sky box.
In an exemplary embodiment of the invention, the method further comprises:
obtaining a model texture pattern of the virtual gift model, and obtaining a sequence frame picture corresponding to the model name;
sequentially configuring the sequence frame pictures to obtain a rendering sequence of the sequence frame pictures;
and rendering the model texture pattern by using the sequence frame pictures based on the rendering sequence to obtain the updated virtual gift model.
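A rough sketch of this sequence-frame rendering is shown below, assuming each frame picture is composited over the model texture pattern in the configured rendering order; the file handling and the ordering rule are assumptions for illustration only.

```python
from PIL import Image

def render_sequence(texture_path, frame_paths):
    """Composite the sequence frame pictures over the model texture in rendering order."""
    base = Image.open(texture_path).convert("RGBA")
    ordered = sorted(frame_paths)                    # assumed rendering-order configuration
    rendered = []
    for path in ordered:
        frame = Image.open(path).convert("RGBA").resize(base.size)
        rendered.append(Image.alpha_composite(base, frame))
    return rendered                                  # play back in order to animate the part
```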
In an exemplary embodiment of the invention, the method further comprises:
and saving the updated virtual gift model according to a specified size.
In an exemplary embodiment of the present invention, the saving the updated virtual gift model in a designated size includes:
drawing a model base corresponding to the updated virtual gift model;
and storing the updated virtual gift model according to a specified size based on the model base.
In an exemplary embodiment of the invention, the method further comprises:
acquiring an original gift picture corresponding to the virtual gift model;
and updating the original gift picture according to the updated virtual gift model to obtain a target gift picture.
In an exemplary embodiment of the invention, the method further comprises:
obtaining the amount of the existing resources, and carrying out gift giving processing on the updated virtual gift model to obtain the amount of the gift to be given;
and carrying out presentation agreement judgment on the existing resource amount according to the presentation gift amount to obtain a presentation judgment result, and generating presentation prompt information according to the presentation judgment result.
In an exemplary embodiment of the present invention, the presentation prompt information includes gift special effect information and presentation failure information,
the generating presentation prompt information according to the presentation determination result includes:
if the present judgment result is that the existing resource amount is larger than or equal to the present gift amount, performing special effect display judgment on the present gift amount to obtain a special effect judgment result, and generating updated gift special effect information of the virtual gift model according to the special effect judgment result;
and if the present judgment result is that the existing resource amount is smaller than the present gift amount, generating present failure information corresponding to the updated virtual gift model.
In an exemplary embodiment of the present invention, the determining the special effect display of the gifted gift amount to obtain a special effect determination result, and generating the updated gift special effect information of the virtual gift model according to the special effect determination result includes:
obtaining special effect resource amount corresponding to the gift special effect information, and performing special effect display judgment on the donation resource amount by using the special effect resource amount to obtain a special effect judgment result;
and if the special effect judgment result is that the donation resource amount is larger than the special effect resource amount, generating the updated gift special effect information of the virtual gift model.
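The gifting judgment above amounts to two comparisons, sketched below; the prompt strings and example amounts are illustrative assumptions.

```python
def gifting_prompt(existing_amount, gift_amount, effect_amount):
    """Return presentation prompt information from the amount comparisons described above."""
    if existing_amount < gift_amount:
        return "gifting failed: the existing resource amount is insufficient"  # presentation failure information
    if gift_amount > effect_amount:
        return "gift sent: play the gift special effect"                       # gift special effect information
    return "gift sent"

# Example: a 2,000,000-ticket balance, a 1,000,000-ticket gift and a 500,000-ticket effect threshold
# gifting_prompt(2_000_000, 1_000_000, 500_000)  # -> "gift sent: play the gift special effect"
```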
In an exemplary embodiment of the invention, the method further comprises:
obtaining a target model name in the model names;
rendering a video stream of the virtual gift model at a model part corresponding to the target model name.
The details of the virtual gift processing device 2600 are already described in detail in the corresponding virtual gift processing method, and therefore are not described herein again.
It should be noted that although several modules or units of the processing device 2600 of the virtual gift are mentioned in the above detailed description, such division is not mandatory. Indeed, according to the embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
An electronic apparatus 2700 according to such an embodiment of the present invention is described below with reference to fig. 27. Fig. 27 shows an electronic device 2700 as an example and should not be construed as limiting the scope of use or functionality of embodiments of the present invention in any way.
As shown in fig. 27, an electronic device 2700 is in the form of a general purpose computing device. Components of electronic device 2700 may include, but are not limited to: the at least one processing unit 2710, the at least one memory unit 2720, a bus 2730 connecting different system components (including the memory unit 2720 and the processing unit 2710), and a display unit 2740.
Wherein the memory unit stores program code that is executable by the processing unit 2710 such that the processing unit 2710 performs the steps according to various exemplary embodiments of the present invention described in the "exemplary methods" section above in this specification.
The storage unit 2720 may include readable media in the form of volatile memory, such as a random access memory unit (RAM) 2721 and/or a cache memory unit 2722, and may further include a read-only memory unit (ROM) 2723.
The storage unit 2720 may also include a program/utility 2724 having a set (at least one) of program modules 2725, such program modules 2725 including but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 2730 may be a local bus representing one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
Electronic device 2700 can also communicate with one or more external devices 2900 (e.g., keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with electronic device 2700, and/or with any devices (e.g., router, modem, etc.) that enable electronic device 2700 to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interface 2750. Moreover, electronic device 2700 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via network adapter 2760. As shown, network adapter 2760 communicates with the other modules of electronic device 2700 over bus 2730. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in connection with electronic device 2700, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above-mentioned "exemplary methods" section of the present description, when said program product is run on the terminal device.
Referring to fig. 28, a program product 2800 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (19)

1. A method of processing a virtual gift, the method comprising:
obtaining a virtual gift model, and carrying out unified division processing on the virtual gift model to obtain a model name and a model part corresponding to the model name;
and carrying out model customization processing on the model part according to the model name to update the virtual gift model.
2. The method of processing a virtual gift as recited in claim 1, wherein the model name comprises: the map name, the model color, the avatar name, and the model file.
3. The method of processing a virtual gift as claimed in claim 2, wherein said updating the virtual gift model by performing the model customization process on the model part according to the model name comprises:
if the model name is the map name, obtaining a model map corresponding to the map name;
and determining the model part corresponding to the map name, and performing model customization processing on the model part by using the model map to update the virtual gift model.
4. The method of processing a virtual gift as claimed in claim 2, wherein said updating the virtual gift model by performing the model customization process on the model part according to the model name comprises:
if the model name is the model color, acquiring an original color and a target color corresponding to the model color and a pixel value of the virtual gift model;
and calculating the color difference value of the original color and the target color to obtain the color difference value, and updating the virtual gift model by using the pixel value and the color difference value.
5. The method of processing a virtual gift as claimed in claim 2, wherein said updating the virtual gift model by performing the model customization process on the model part according to the model name comprises:
if the model name is the avatar name, acquiring an avatar option corresponding to the avatar name;
and determining the model part corresponding to the avatar name, and performing model customization processing on the model part by using the avatar option to update the virtual gift model.
6. The method of processing a virtual gift as recited in claim 5, wherein the avatar option comprises: anchor avatar, user avatar, and fusion avatar.
7. The method of processing a virtual gift as claimed in claim 2, wherein said updating the virtual gift model by performing the model customization process on the model part according to the model name comprises:
if the model name is the model file, acquiring file information corresponding to the model file;
and determining the model part corresponding to the model file, and performing model customization processing on the model part by using the file information to update the virtual gift model.
8. A method of processing a virtual gift as recited in claim 1, further comprising:
acquiring a panoramic picture, and generating a sky box based on the panoramic picture;
rendering the updated virtual gift model with the sky box.
9. A method of processing a virtual gift as recited in claim 1, further comprising:
obtaining a model texture pattern of the virtual gift model, and obtaining a sequence frame picture corresponding to the model name;
sequentially configuring the sequence frame pictures to obtain a rendering sequence of the sequence frame pictures;
and rendering the model texture pattern by using the sequence frame pictures based on the rendering sequence to obtain the updated virtual gift model.
10. A method of processing a virtual gift as recited in claim 1, further comprising:
and saving the updated virtual gift model according to a specified size.
11. A method of processing a virtual gift as defined in claim 10, wherein said storing the updated virtual gift model at a specified size comprises:
drawing a model base corresponding to the updated virtual gift model;
and storing the updated virtual gift model according to a specified size based on the model base.
12. A method of processing a virtual gift as recited in claim 1, further comprising:
acquiring an original gift picture corresponding to the virtual gift model;
and updating the original gift picture according to the updated virtual gift model to obtain a target gift picture.
13. A method of processing a virtual gift as recited in claim 1, further comprising:
obtaining the amount of the existing resources, and carrying out gift giving processing on the updated virtual gift model to obtain the amount of the gift to be given;
and carrying out presentation agreement judgment on the existing resource amount according to the presentation gift amount to obtain a presentation judgment result, and generating presentation prompt information according to the presentation judgment result.
14. The method of processing a virtual gift according to claim 13, wherein the presentation prompt information includes gift special effect information and presentation failure information,
the generating presentation prompt information according to the presentation determination result includes:
if the present judgment result is that the existing resource amount is larger than or equal to the present gift amount, performing special effect display judgment on the present gift amount to obtain a special effect judgment result, and generating updated gift special effect information of the virtual gift model according to the special effect judgment result;
and if the present judgment result is that the existing resource amount is smaller than the present gift amount, generating present failure information corresponding to the updated virtual gift model.
15. The method of processing a virtual gift according to claim 14, wherein the performing special effect display judgment on the present gift amount to obtain a special effect judgment result, and generating the updated gift special effect information of the virtual gift model according to the special effect judgment result, comprises:
obtaining special effect resource amount corresponding to the gift special effect information, and performing special effect display judgment on the donation resource amount by using the special effect resource amount to obtain a special effect judgment result;
and if the special effect judgment result is that the donation resource amount is larger than the special effect resource amount, generating the updated gift special effect information of the virtual gift model.
16. A method of processing a virtual gift as recited in claim 1, further comprising:
obtaining a target model name in the model names;
rendering a video stream of the virtual gift model at a model part corresponding to the target model name.
17. A device for processing a virtual gift, comprising:
the model dividing module is configured to obtain a virtual gift model, and uniformly divide the virtual gift model to obtain a model name and a model part corresponding to the model name;
and the model customizing module is configured to perform model customizing processing on the model part according to the model name to update the virtual gift model.
18. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of processing a virtual gift according to any one of claims 1-16.
19. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of processing a virtual gift of any one of claims 1-16 via execution of the executable instructions.
CN202110959883.9A 2021-08-20 2021-08-20 Virtual gift processing method and device, storage medium and electronic equipment Active CN113473211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110959883.9A CN113473211B (en) 2021-08-20 2021-08-20 Virtual gift processing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113473211A true CN113473211A (en) 2021-10-01
CN113473211B CN113473211B (en) 2023-07-14

Family

ID=77866893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110959883.9A Active CN113473211B (en) 2021-08-20 2021-08-20 Virtual gift processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113473211B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108111881A (en) * 2017-12-15 2018-06-01 广州虎牙信息科技有限公司 Direct broadcasting room virtual item collocation method, device and corresponding server
US20190355050A1 (en) * 2018-05-18 2019-11-21 Gift Card Impressions, LLC Augmented reality gifting on a mobile device
CN110493630A (en) * 2019-09-11 2019-11-22 广州华多网络科技有限公司 The treating method and apparatus of virtual present special efficacy, live broadcast system
CN111643898A (en) * 2020-05-22 2020-09-11 腾讯科技(深圳)有限公司 Virtual scene construction method, device, terminal and storage medium
CN113395533A (en) * 2021-05-24 2021-09-14 广州博冠信息科技有限公司 Virtual gift special effect display method and device, computer equipment and storage medium


Also Published As

Publication number Publication date
CN113473211B (en) 2023-07-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant