CN116366909B - Virtual article processing method and device, electronic equipment and storage medium - Google Patents

Virtual article processing method and device, electronic equipment and storage medium

Info

Publication number
CN116366909B
CN116366909B (Application No. CN202310638533.1A)
Authority
CN
China
Prior art keywords
special effect
target
virtual article
article
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310638533.1A
Other languages
Chinese (zh)
Other versions
CN116366909A
Inventor
汤晔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202310638533.1A
Publication of CN116366909A
Application granted
Publication of CN116366909B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8146Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • H04N21/8405Generation or processing of descriptive data, e.g. content descriptors represented by keywords

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The disclosure relates to a virtual article processing method and device, an electronic device and a storage medium. The method includes: when a first live object in a target live room triggers a custom virtual article creation instruction, performing semantic recognition on the text information to be processed to obtain at least one special effect keyword and at least one material keyword; acquiring target special effect material corresponding to the at least one material keyword and a target special effect template matched with the at least one special effect keyword, the target special effect template being generated based on a special effect template generation model; determining virtual article configuration information corresponding to the custom virtual article; and deploying the custom virtual article to a second live object in the target live room based on the virtual article configuration information, the target special effect template and the target special effect material. With the embodiments of the disclosure, the personalization and diversity of virtual articles can be improved.

Description

Virtual article processing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of internet, and in particular relates to a virtual article processing method, a virtual article processing device, electronic equipment and a storage medium.
Background
With the development of internet technology and the growing popularity of intelligent terminals, more and more users engage in online entertainment through smart devices such as mobile phones, for example watching webcasts on a phone. While watching a webcast, a user can interact with a favorite anchor by giving virtual articles. In the related art, the virtual articles in a live room are usually configured uniformly by the platform, so they cannot meet users' diverse needs for personalized expression. This dampens users' enthusiasm for interacting through virtual articles during a live broadcast and leads to poor live interactivity; users then jump frequently between live rooms, which produces a large number of invalid stream-pulling operations and in turn causes problems such as wasted system resources and degraded system performance.
Disclosure of Invention
The disclosure provides a virtual article processing method and device, an electronic device and a storage medium, which at least solve the problems in the related art of low user enthusiasm for interacting through virtual articles during a live broadcast, poor live interactivity, wasted system resources and degraded system performance. The technical scheme of the present disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided a virtual article processing method, including:
when a first live object in a target live room triggers a custom virtual article creation instruction carrying text information to be processed, performing semantic recognition on the text information to be processed to obtain at least one special effect keyword and at least one material keyword; the text information to be processed is the special effect description information corresponding to the custom virtual article to be generated;
acquiring a target special effect material corresponding to the at least one material keyword and a target special effect template matched with the at least one special effect keyword, wherein the target special effect template is generated based on a special effect template generation model;
determining virtual article configuration information corresponding to the customized virtual article;
and deploying the custom virtual article to a second live object in the target live room based on the virtual article configuration information, the target special effect template and the target special effect material.
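The four steps of the first aspect can be pictured with the minimal server-side sketch below; every name in it (functions, fields, attributes) is an illustrative assumption rather than part of the claimed method.

```python
# Minimal sketch of the first-aspect flow; all names below are illustrative
# assumptions and not part of the claimed method.
def handle_custom_virtual_article_creation(instruction, target_live_room, server):
    text = instruction["text_to_be_processed"]                     # special effect description information
    effect_kws, material_kws = server.semantic_recognize(text)     # semantic recognition
    materials = server.get_target_materials(material_kws)          # target special effect material
    template = server.effect_template_model.generate(effect_kws)   # target special effect template
    config = server.determine_item_config(instruction)             # virtual article configuration information
    # deploy the custom virtual article to the second live object(s) in the room
    server.deploy(target_live_room.second_live_objects, config, template, materials)
```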
In an optional embodiment, the deploying the custom virtual item to the second live object in the target live room based on the virtual item configuration information, the target special effects template, and the target special effects material includes:
creating article identification information corresponding to the customized virtual article according to the virtual article configuration information, the target special effect template and the target special effect material;
Issuing an article deployment instruction carrying the article identification information to the second live broadcast object; the article deployment instruction is used for indicating to acquire the virtual article configuration information based on the article identification information and rendering the article information corresponding to the customized virtual article on the live broadcast virtual article panel corresponding to the second live broadcast object based on the virtual article configuration information.
In an optional embodiment, the issuing, to the second live object, an item deployment instruction carrying the item identification information includes:
issuing, to the second live object, the article deployment instruction carrying the article identification information and target validity information;
the target validity information is used for indicating the validity period for using the custom virtual article.
In an alternative embodiment, the method further comprises:
when the second live object triggers, based on the article information, a virtual article interaction instruction carrying the article identification information, acquiring the target special effect template and the target special effect material based on the article identification information;
issuing, to the live objects in the target live room, an article interaction instruction carrying the target special effect template and the target special effect material, wherein the article interaction instruction instructs rendering of the custom special effect in the target live room based on the target special effect template and the target special effect material, so as to distribute the custom virtual article to the live broadcast initiating object in the target live room.
In an alternative embodiment, the special effects template generation model includes a base template recognition network and a template generation network; the method further comprises the steps of:
inputting the at least one special effect keyword into the basic template recognition network to perform basic template recognition processing, and determining a preset basic special effect template matched with the at least one special effect keyword;
inputting the preset basic special effect template into the template generation network to perform template generation processing to obtain the target special effect template.
In an optional embodiment, the performing semantic recognition on the text information to be processed to obtain at least one special effect keyword and at least one material keyword includes:
inputting the text information to be processed into a preset semantic recognition model for keyword class recognition to obtain the at least one special effect keyword and the at least one material keyword.
In an optional embodiment, the text information to be processed is associated with target interaction information, and interaction index data corresponding to the target interaction information is greater than a preset threshold; and the interaction index data represents the interaction heat of the target interaction information on the live broadcast platform.
In an alternative embodiment, the method further comprises:
and sending creation prompt information for the custom virtual article to the first live object, wherein the creation prompt information carries the target interaction information and is used for prompting the creation of a custom virtual article associated with the target interaction information.
In an optional embodiment, after obtaining the target special effects material corresponding to the at least one material keyword and the target special effects template matched with the at least one special effects keyword, the method further includes:
generating a custom special effect corresponding to the custom virtual article based on the target special effect template and the target special effect material;
the determining the virtual article configuration information corresponding to the customized virtual article comprises the following steps:
and under the condition that a special effect confirmation instruction aiming at the custom special effect is received, acquiring the virtual article configuration information.
In an optional embodiment, the generating the custom special effect corresponding to the custom virtual object based on the target special effect template and the target special effect material includes:
issuing, to the first live object, a special effect preview instruction carrying the target special effect template and the target special effect material;
the special effect preview instruction is used for instructing the rendering of the custom special effect on the special effect preview interface corresponding to the first live object based on the target special effect template and the target special effect material.
In an optional embodiment, the virtual article configuration information is generated according to the text information to be processed or a custom special effect corresponding to the custom virtual article; the virtual article configuration information comprises an icon of the customized virtual article, a virtual resource amount corresponding to the customized virtual article and an article name of the customized virtual article.
According to a second aspect of embodiments of the present disclosure, there is provided another virtual article processing method, including:
in response to a custom virtual article creation instruction triggered by a first live object in a target live room, the custom virtual article creation instruction carrying text information to be processed, acquiring a target special effect template matched with at least one special effect keyword in the text information to be processed and target special effect material corresponding to at least one material keyword in the text information to be processed; the text information to be processed is the special effect description information corresponding to the custom virtual article to be generated; the at least one special effect keyword and the at least one material keyword are obtained by performing semantic recognition on the text information to be processed; and the target special effect template is generated based on a special effect template generation model;
Determining virtual article configuration information corresponding to the customized virtual article;
and sending a special effect confirmation instruction to a server, wherein the special effect confirmation instruction is used for indicating that the custom virtual object is deployed to a second live object in the target live broadcasting room based on the target special effect template, the target special effect material and the virtual object configuration information.
In an optional embodiment, in a case where the first live object is any live browsing object in the target live room, and the second live object includes the any live browsing object, the method further includes:
receiving an article deployment instruction issued by the server; the article deployment instruction carries article identification information corresponding to the customized virtual article; the article identification information is created based on the virtual article configuration information, the target special effect template and the target special effect material;
acquiring the virtual article configuration information based on the article identification information;
and rendering the article information corresponding to the customized virtual article on the live broadcast virtual article panel corresponding to any live broadcast browsing object based on the virtual article configuration information.
In an alternative embodiment, the article deployment instruction further carries target validity information;
the target validity information is used for indicating the validity period for using the custom virtual article.
In an alternative embodiment, the method further comprises:
receiving an item interaction instruction sent by the server under the condition that the second live object triggers a virtual item interaction instruction based on item information corresponding to the customized virtual item, wherein the item interaction instruction carries the target special effect template and the target special effect material;
and rendering the custom special effect on a live broadcast page of the target live broadcast room based on the target special effect template and the target special effect material so as to distribute the custom virtual object to a live broadcast initiating object in the target live broadcast room.
In an optional embodiment, the virtual article configuration information is generated according to the text information to be processed or a custom special effect corresponding to the custom virtual article; the virtual article configuration information comprises an icon of the customized virtual article, a virtual resource amount corresponding to the customized virtual article and an article name of the customized virtual article.
According to a third aspect of embodiments of the present disclosure, there is provided a virtual article processing apparatus, comprising:
the semantic recognition module is configured to, when a first live object in a target live room triggers a custom virtual article creation instruction carrying text information to be processed, perform semantic recognition on the text information to be processed to obtain at least one special effect keyword and at least one material keyword; the text information to be processed is the special effect description information corresponding to the custom virtual article to be generated;
the special effect data acquisition module is configured to acquire a target special effect material corresponding to the at least one material keyword and a target special effect template matched with the at least one special effect keyword, and the target special effect template is generated based on a special effect template generation model;
the first virtual article configuration information determining module is configured to determine virtual article configuration information corresponding to the customized virtual article;
and the custom virtual article deployment module is configured to perform the deployment of the custom virtual article to a second live object in the target live room based on the virtual article configuration information, the target special effect template and the target special effect material.
In an alternative embodiment, the custom virtual article deployment module includes:
the article identification information creation unit is configured to execute the creation of article identification information corresponding to the customized virtual article according to the virtual article configuration information, the target special effect template and the target special effect material;
the article deployment instruction issuing unit is configured to execute the article deployment instruction carrying the article identification information to the second live broadcast object; the article deployment instruction is used for indicating to acquire the virtual article configuration information based on the article identification information and rendering the article information corresponding to the customized virtual article on the live broadcast virtual article panel corresponding to the second live broadcast object based on the virtual article configuration information.
In an optional embodiment, the article deployment instruction issuing unit is specifically configured to issue, to the second live object, the article deployment instruction carrying the article identification information and the target validity information;
the target validity information is used for indicating the validity period for using the custom virtual article.
In an alternative embodiment, the apparatus further comprises:
the data acquisition module is configured to, when the second live object triggers, based on the article information, a virtual article interaction instruction carrying the article identification information, acquire the target special effect template and the target special effect material based on the article identification information;
and the article interaction instruction issuing module is configured to issue, to the live objects in the target live room, an article interaction instruction carrying the target special effect template and the target special effect material, wherein the article interaction instruction instructs rendering of the custom special effect in the target live room based on the target special effect template and the target special effect material, so as to distribute the custom virtual article to the live broadcast initiating object in the target live room.
In an alternative embodiment, the special effects template generation model includes a base template recognition network and a template generation network; the target special effect template generation module comprises:
a preset basic special effect template determining unit configured to perform basic template recognition processing by inputting the at least one special effect keyword into the basic template recognition network, and determine a preset basic special effect template matched with the at least one special effect keyword;
And the template generation processing unit is configured to input the preset basic special effect template into the template generation network to perform template generation processing so as to obtain the target special effect template.
In an optional embodiment, the semantic recognition module is specifically configured to perform keyword class recognition by inputting the text information to be processed into a preset semantic recognition model, so as to obtain the at least one special effect keyword and the at least one material keyword.
In an optional embodiment, the text information to be processed is associated with target interaction information, and interaction index data corresponding to the target interaction information is greater than a preset threshold; and the interaction index data represents the interaction heat of the target interaction information on the live broadcast platform.
In an alternative embodiment, the apparatus further comprises:
the creation prompt information sending module is configured to send creation prompt information for the custom virtual article to the first live object, wherein the creation prompt information carries the target interaction information and is used for prompting the creation of a custom virtual article associated with the target interaction information.
In an alternative embodiment, the apparatus further comprises:
the custom special effect generation module is configured to execute the generation of the custom special effect corresponding to the custom virtual object based on the target special effect template and the target special effect material after the target special effect material corresponding to the at least one material keyword and the target special effect template matched with the at least one special effect keyword are acquired;
the first virtual article configuration information determining module is specifically configured to execute obtaining the virtual article configuration information when receiving a special effect confirmation instruction for the custom special effect.
In an alternative embodiment, the custom special effect generation module includes:
the special effect preview instruction issuing unit is configured to issue, to the first live object, the special effect preview instruction carrying the target special effect template and the target special effect material;
the special effect preview instruction is used for instructing the rendering of the custom special effect on the special effect preview interface corresponding to the first live object based on the target special effect template and the target special effect material.
In an optional embodiment, the virtual article configuration information is generated according to the text information to be processed or a custom special effect corresponding to the custom virtual article; the virtual article configuration information comprises an icon of the customized virtual article, a virtual resource amount corresponding to the customized virtual article and an article name of the customized virtual article.
According to a fourth aspect of embodiments of the present disclosure, there is provided another virtual article processing apparatus, comprising:
the special effect information acquisition module is configured to, in response to a custom virtual article creation instruction triggered by a first live object in a target live room and carrying text information to be processed, acquire a target special effect template matched with at least one special effect keyword in the text information to be processed and target special effect material corresponding to at least one material keyword in the text information to be processed; the text information to be processed is the special effect description information corresponding to the custom virtual article to be generated; the at least one special effect keyword and the at least one material keyword are obtained by performing semantic recognition on the text information to be processed; and the target special effect template is generated based on a special effect template generation model;
the second virtual article configuration information determining module is configured to determine virtual article configuration information corresponding to the customized virtual article;
and the special effect confirmation instruction sending module is configured to send the special effect confirmation instruction to a server, wherein the special effect confirmation instruction is used for indicating that the custom virtual object is deployed to a second live object in the target live room based on the target special effect template, the target special effect material and the virtual object configuration information.
In an optional embodiment, in a case where the first live object is any live browsing object in the target live room, and the second live object includes the any live browsing object, the apparatus further includes:
the article deployment instruction receiving module is configured to execute and receive an article deployment instruction issued by the server; the article deployment instruction carries article identification information corresponding to the customized virtual article; the article identification information is created based on the virtual article configuration information, the target special effect template and the target special effect material;
a virtual article configuration information acquisition module configured to perform acquisition of the virtual article configuration information based on the article identification information;
and the article information rendering module is configured to execute the rendering of the article information corresponding to the customized virtual article on the live virtual article panel corresponding to any live browsing object based on the virtual article configuration information.
In an alternative embodiment, the article deployment instruction further carries target validity information;
the target validity information is used for indicating the validity period for using the custom virtual article.
In an alternative embodiment, the apparatus further comprises:
the article interaction instruction receiving module is configured to receive an article interaction instruction sent by the server under the condition that the second live broadcast object triggers a virtual article interaction instruction based on article information corresponding to the customized virtual article, wherein the article interaction instruction carries the target special effect template and the target special effect material;
the first custom effect rendering module is configured to perform rendering of the custom effect on a live page of the target live room based on the target effect template and the target effect material to distribute the custom virtual object to live broadcast initiating objects in the target live room.
In an alternative embodiment, the apparatus further comprises:
the second custom effect rendering module is configured to execute the custom effect corresponding to the custom virtual object on the basis of the target effect template and the target effect material after the target effect template matched with at least one effect keyword in the text information to be processed and the target effect material corresponding to at least one material keyword in the text information to be processed are acquired;
The special effect confirmation instruction transmitting module is specifically configured to execute transmitting the special effect confirmation instruction to a server when the special effect confirmation instruction for the custom special effect is detected.
In an optional embodiment, the virtual article configuration information is generated according to the text information to be processed or a custom special effect corresponding to the custom virtual article; the virtual article configuration information comprises an icon of the customized virtual article, a virtual resource amount corresponding to the customized virtual article and an article name of the customized virtual article.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic device, comprising: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method of any one of the first or second aspects above.
According to a sixth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the method of any one of the above-described first or second aspects of embodiments of the present disclosure.
According to a seventh aspect of embodiments of the present disclosure, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any one of the first or second aspects described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
when a first live object in a target live room triggers a custom virtual article creation instruction, the custom virtual article creation instruction carries text information to be processed, and the text information to be processed is the special effect description information corresponding to the custom virtual article to be generated. By performing semantic recognition on the text information to be processed, special effect keywords and material keywords can be distinguished from the user's own textual expression, so that the special effect style and special effect materials corresponding to the custom virtual article are accurately identified. Then, the corresponding target special effect material is acquired based on the at least one material keyword, and a target special effect template corresponding to the special effect keywords is acquired, the template being generated based on a special effect template generation model, so that the code script required for rendering the special effect can be generated automatically, meeting users' needs for diverse expression. Next, the virtual article configuration information corresponding to the custom virtual article is determined, and the custom virtual article is deployed to the second live object in the target live room by combining the virtual article configuration information, the target special effect template and the target special effect material. In this way, virtual articles can be customized during a live broadcast, which improves the personalization of virtual articles and greatly increases the diversity of the virtual articles that can be created; it also better boosts users' enthusiasm for interacting through virtual articles during a live broadcast and the interactivity of the live broadcast, reduces users' frequent jumps between live rooms and the large number of invalid stream-pulling operations such jumps cause, and thereby reduces the waste of system resources and improves system performance.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic diagram of an application environment shown in accordance with an exemplary embodiment;
FIG. 2 is a flow chart illustrating a virtual article processing method according to an example embodiment;
FIG. 3 is a schematic diagram of a custom virtual article creation interface provided in accordance with an exemplary embodiment;
FIG. 4 is a flowchart illustrating another virtual article processing method according to an example embodiment;
FIG. 5 is a block diagram of a virtual article processing apparatus according to an example embodiment;
FIG. 6 is a block diagram of another virtual article processing apparatus according to an example embodiment;
FIG. 7 is a block diagram of an electronic device for virtual article processing, shown in accordance with an exemplary embodiment;
FIG. 8 is a block diagram of another electronic device for virtual article processing, shown according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for presentation, analyzed data, etc.) related to the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
Artificial intelligence is the theory, method, technique and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend and expand human intelligence, sense the environment, acquire knowledge and use the knowledge to obtain optimal results. In other words, artificial intelligence is an integrated technology of computer science that attempts to understand the essence of intelligence and to produce a new intelligent machine that can react in a similar way to human intelligence. Artificial intelligence, i.e. research on design principles and implementation methods of various intelligent machines, enables the machines to have functions of sensing, reasoning and decision.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems and mechatronics.
The scheme provided by the embodiments of the present application relates to technologies such as artificial intelligence speech processing, natural language processing and content generation, and in particular to processing such as speech recognition in speech processing, semantic recognition in natural language processing, and the training of the special effect template generation model and special effect template generation in content generation, as described in the following embodiments:
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application environment that may include a terminal 100 and a server 200 according to an exemplary embodiment.
In an alternative embodiment, the terminal 100 may be configured to provide live business services such as virtual article creation and live interaction to any user. Specifically, the terminal 100 may include, but is not limited to, a smart phone, a desktop computer, a tablet computer, a notebook computer, a smart speaker, a digital assistant, an augmented reality (AR) / virtual reality (VR) device, a smart wearable device or another type of electronic device, or may be software running on such an electronic device, such as an application program. Optionally, the operating system running on the electronic device may include, but is not limited to, an Android system, an iOS system, Linux, Windows and the like.
In an alternative embodiment, the server 200 may provide background services for the terminal 100. Specifically, the server 200 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server that provides a cloud computing service.
In addition, it should be noted that, fig. 1 is only an application environment provided by the present disclosure, and in practical application, other application environments may also be included, for example, may include more terminals.
In the embodiment of the present disclosure, the terminal 100 and the server 200 may be directly or indirectly connected through a wired or wireless communication manner, which is not limited herein.
Fig. 2 is a flowchart illustrating a virtual article processing method according to an exemplary embodiment, which may be applied to an electronic device such as a server or a terminal, as shown in fig. 2, and may include the following steps:
s201: under the condition that a first direct broadcasting object in a target direct broadcasting room triggers a user-defined virtual article creation instruction carrying text information to be processed, carrying out semantic recognition on the text information to be processed to obtain at least one special effect keyword and at least one material keyword.
In a specific embodiment, the target live room may be any online live room on the live platform. Optionally, the target live room may be a live room initiated by a single anchor account, or a live room initiated jointly by at least two anchors (for example, a PK live room or a co-hosting, mic-connected live room). The first live object may be the object that initiates the custom virtual article creation instruction in the target live room; an object in the embodiments of the present application may be a user account. Optionally, the first live object may be the live broadcast initiating object (the anchor account), or any live browsing object (a viewer account) in the target live room.
In a specific embodiment, the text information to be processed is the special effect description information corresponding to the custom virtual article to be generated; the custom virtual article is a virtual article customized by the user, and may be an article that can be used for interaction in a live room, such as a virtual flower or a virtual doll. In practical applications, a custom virtual article creation entry may be provided in the live page. Optionally, the user corresponding to the first live object may enter the custom virtual article creation interface through the custom virtual article creation entry in the live page of the target live room, and input the text information to be processed in the custom virtual article creation interface by means of text input, voice input, image input and the like. Optionally, when the text information to be processed is input by voice, the input voice information may be converted into the text information to be processed by a speech recognition technique. Optionally, when the text information to be processed is input as an image, the input image may be converted into the text information to be processed by a text recognition technique. Further, after the user inputs the text information to be processed, the custom virtual article creation instruction can be triggered.
In a specific embodiment, a user-defined virtual article creation interface and a live broadcast interface can be displayed on a live broadcast page in a split screen display mode; and the user-defined virtual article creation interface can be displayed by jumping to a new page, and the live broadcast interface can be correspondingly displayed in a small window mode.
In a specific embodiment, taking split screen presentation as an example, as shown in fig. 3, fig. 3 is a schematic diagram of a custom virtual article creation interface provided according to an exemplary embodiment. Wherein 301 corresponds to a live interface and 302 corresponds to a custom virtual article creation interface.
S203: performing semantic recognition on the text information to be processed to obtain at least one special effect keyword and at least one material keyword.
In a specific embodiment, the at least one special effect keyword may be a keyword in the text information to be processed that indicates a special effect style; the at least one material keyword may be a keyword in the text information to be processed that indicates a material required by the special effect. Optionally, suppose the text information to be processed is "a firework effect formed of roses, with me and the anchor in the middle of the firework"; accordingly, the at least one special effect keyword may include "firework" and "in the middle", and the at least one material keyword may include "rose", "me" and "anchor".
In a specific embodiment, the performing semantic recognition on the text information to be processed to obtain at least one special effect keyword and at least one material keyword may include:
inputting the text information to be processed into a preset semantic recognition model to recognize the keyword class, and obtaining at least one special effect keyword and at least one material keyword.
In a specific embodiment, the preset semantic recognition model may be a preset deep learning model for keyword class recognition. Optionally, a specific model structure of the preset semantic recognition model can be set in combination with actual application requirements. Optionally, first performing keyword class recognition training on a first deep learning model to be trained by combining a large amount of first training data to obtain an initial semantic recognition model; and then, carrying out fine tuning training on the initial semantic recognition model by combining the second training data so as to obtain a preset semantic recognition model with better keyword class recognition effect. Optionally, the first training data may include a plurality of first text sample information and preset category labels corresponding to keywords in each first text sample information; the second training data comprises a plurality of second sample text information and preset category labels corresponding to keywords in each second sample text information; optionally, the first sample text information and the second sample text information may include the same sample text information, or may include different sample text information; specifically, the sample text information may be special effect description information of the sample special effect; the preset category labels can represent keyword categories to which corresponding keywords belong, and specifically, the keyword categories can comprise special effect keywords and material keywords; optionally, the preset category label corresponding to the special effect keyword may be (1, 0); the preset category label corresponding to the material keyword may be (0, 1). If the preset category label corresponding to a certain keyword is (1, 0), the probability that the keyword belongs to a special effect keyword is determined to be 1, and the probability that the keyword belongs to a material keyword is determined to be 0; namely, the keyword belongs to a special effect keyword; if the preset category label corresponding to a certain keyword is (0, 1), the probability that the keyword belongs to a special effect keyword is determined to be 0, and the probability that the keyword belongs to a material keyword is determined to be 1; i.e. the keyword belongs to the material keyword.
In a specific embodiment, inputting the text information to be processed into a preset semantic recognition model for keyword class recognition, and obtaining at least one special effect keyword and at least one material keyword may include: inputting the text information to be processed into a preset semantic recognition model to recognize keyword class, and obtaining a prediction class label corresponding to each keyword in the text information to be processed; and combining the prediction category labels corresponding to each keyword to determine at least one special effect keyword and at least one material keyword. Alternatively, the keyword category corresponding to the greater probability in the predicted category label corresponding to each keyword may be used as the keyword category corresponding to the keyword.
In the above embodiment, the keyword category recognition is performed by inputting the text information to be processed into the preset semantic recognition model, and the special effect keywords and the material keywords in the text information to be processed can be rapidly and accurately determined through the semantic recognition of the keywords in the text information to be processed by the model, so that data support is provided for the custom special effect corresponding to the custom virtual object to be created subsequently.
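As a concrete illustration of the keyword class recognition described above, the following minimal sketch splits each predicted keyword into the special effect or material category by comparing the two class probabilities; the classifier interface and its output format are assumptions for illustration only and are not prescribed by the disclosure.

```python
from typing import List, Tuple

class KeywordClassifier:
    """Hypothetical preset semantic recognition model: for each recognized keyword
    it returns a (p_effect, p_material) probability pair, mirroring the (1, 0) /
    (0, 1) preset category labels described above."""
    def predict(self, text: str) -> List[Tuple[str, Tuple[float, float]]]:
        raise NotImplementedError  # e.g. a fine-tuned deep learning model

def split_keywords(model: KeywordClassifier, text: str):
    """Assign each keyword to the category with the larger predicted probability."""
    effect_keywords, material_keywords = [], []
    for keyword, (p_effect, p_material) in model.predict(text):
        if p_effect >= p_material:
            effect_keywords.append(keyword)
        else:
            material_keywords.append(keyword)
    return effect_keywords, material_keywords

# For the example "a firework effect formed of roses, with me and the anchor in
# the middle", the expected split would be:
#   effect_keywords   -> ["firework", "in the middle"]
#   material_keywords -> ["rose", "me", "anchor"]
```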
In another optional embodiment, keyword recognition may be performed on the text information to be processed in advance to obtain a plurality of keywords in the text information to be processed; and a plurality of keywords and the text information to be processed are input into a corresponding semantic recognition model together to perform keyword class recognition, so that the semantic recognition model can be combined with context understanding of the text information to be processed on the basis of clear keywords to perform keyword class recognition, and the keyword class recognition accuracy can be better improved. Correspondingly, compared with the preset semantic recognition model, the input in the corresponding semantic recognition model training process can increase keywords in the sample text information.
S205: and acquiring target special effect materials corresponding to the at least one material keyword and target special effect templates matched with the at least one special effect keyword.
In a specific embodiment, the target special effect material corresponding to the at least one material keyword may be a material corresponding to the material keyword; specifically, the material may be a multimedia resource such as text, image, etc. Optionally, where the at least one material keyword includes "rose", "me", "anchor", for example, the target special effects material may include a rose pattern, my head portrait (head portrait of the audience account triggering the custom virtual article creation instruction), anchor head portrait (head portrait of anchor account).
In an optional embodiment, a large number of materials and the preset material keywords corresponding to them may be stored in the server in advance; accordingly, the target special effect material can be determined by matching the at least one material keyword against the preset material keywords. Optionally, when the executing body is a terminal (the terminal where the first live object is located), the terminal may acquire the target special effect material from the server based on the at least one material keyword.
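A minimal sketch of this keyword-to-material matching is given below; the index structure, the example material record and the live room context are illustrative assumptions.

```python
# Sketch of matching material keywords against preset material keywords stored
# on the server. All names and records here are illustrative assumptions.
PRESET_MATERIALS = {
    "rose": {"type": "image", "url": "https://example.com/materials/rose.png"},
}

def resolve_target_materials(material_keywords, live_room_context):
    """live_room_context supplies dynamic materials such as the avatars of the
    triggering viewer account ("me") and the anchor account ("anchor")."""
    materials = []
    for keyword in material_keywords:
        if keyword in PRESET_MATERIALS:
            materials.append(PRESET_MATERIALS[keyword])
        elif keyword in live_room_context:
            materials.append(live_room_context[keyword])
    return materials
```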
In a specific embodiment, the target special effect template is generated based on a special effect template generation model, and the special effect template generation model may be a pre-trained generated deep learning model for performing special effect template generation processing. Specifically, a specific model structure of the special effect template generation model can be set in combination with actual application requirements.
In an alternative embodiment, the special effects template generation model may include a base template recognition network and a template generation network; the method may further include:
inputting at least one special effect keyword into a basic template recognition network to perform basic template recognition processing, and determining a preset basic special effect template matched with the at least one special effect keyword;
and inputting the preset basic special effect template into a template generation network to perform template generation processing to obtain the target special effect template.
In a specific embodiment, a base special effect template library may be preset, and the base special effect template library may include a plurality of base special effect templates, such as a particle diffusion special effect template, a particle flash special effect template, an enlarging special effect template, a shrinking special effect template, a rotating special effect template and the like. Specifically, a special effect template may be the rendering code script of a special effect; optionally, the special effect template may be a code script in JSON (JavaScript Object Notation) format, so as to ensure the cross-platform universality of the special effect template.
In a specific embodiment, the preset base special effect template matched by any special effect keyword can comprise at least one base special effect template; the basic template recognition network can be used for recognizing basic special effect templates corresponding to special effects of special effect keywords; alternatively, the output of the base template recognition network may be a predicted base effect template tag (a tag of a base effect template indicating effect keyword matching); correspondingly, a corresponding basic special effect template (preset basic special effect template) can be obtained from a basic special effect template library by combining with a predicted basic special effect template label. Specifically, the template generation network may be configured to generate a special effect template corresponding to the special effect keyword based on the basic special effect template matched with the special effect keyword. Optionally, the preset basic special effect template may include at least one basic special effect template; and inputting at least one basic special effect template into a template generation network to perform template generation processing, so that a target special effect template can be obtained. Specifically, the target special effect template may be a rendering code script of the custom special effect.
In a specific embodiment, taking the special effect keyword "firework" as an example, the matched preset basic special effect template can comprise a particle diffusion special effect template and a particle flash special effect template; specifically, after inputting fireworks into the basic template recognition network to perform basic special effect recognition, the output of the basic template recognition network may be: a particle-diffusing special effect template and a label of a particle-flashing special effect template. Correspondingly, a particle diffusion special effect template and a particle flash special effect template can be obtained from a basic special effect template library; and inputting the particle diffusion special effect template and the particle flash special effect template into a template generation network to perform template generation processing, so as to obtain a target special effect template corresponding to the firework.
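To make the notion of a JSON-format special effect template more tangible, the sketch below assembles a hypothetical "firework" target template from the particle diffusion and particle flash base templates mentioned above; the schema (field names, parameters, material slots) is an assumption, since the disclosure only states that templates are JSON rendering code scripts.

```python
import json

# Hypothetical "firework" target special effect template assembled from the
# particle diffusion and particle flash base templates. The schema is assumed.
firework_template = {
    "effect": "firework",
    "layers": [
        {"base": "particle_diffusion", "params": {"particle_count": 200, "spread": 1.5}},
        {"base": "particle_flash", "params": {"interval_ms": 120}},
    ],
    # Slots to be filled with the target special effect materials, e.g. the
    # rose pattern and the viewer/anchor avatars.
    "material_slots": ["particle_texture", "center_left", "center_right"],
}

print(json.dumps(firework_template, indent=2, ensure_ascii=False))
```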
In a specific embodiment, special effect template generation training may be performed on the second deep learning model to be trained in advance based on a sample special effect keyword (special effect keyword of a sample special effect), a preset basic special effect template tag corresponding to the sample special effect keyword, and a preset special effect template corresponding to the sample special effect, so as to obtain the special effect template generation model; specifically, the second deep learning model to be trained may include a base template recognition network to be trained and a template generation network to be trained. The sample special effect keywords can be input into a basic template recognition network to be trained to recognize basic templates, so that sample basic special effect template labels are obtained; then, inputting a basic special effect template corresponding to the sample basic special effect template label into a template generation network to be trained to perform template generation processing, so as to obtain a sample special effect template; then, according to the sample basic special effect template label and the preset basic special effect template label, determining basic template recognition loss (representing basic template recognition performance of a basic template recognition network to be trained); determining template generation loss (representing template generation performance of a template generation network to be trained) according to the sample special effect template and a preset special effect template; then, the basic template recognition loss and template generation loss can be used for determining the total loss of the second deep learning model to be trained; updating model parameters (namely network parameters in a basic template recognition network to be trained and a template generation network to be trained) by combining a gradient descent method and total loss; and repeating the steps of inputting the sample special effect keywords into a basic template recognition network to be trained for basic template recognition based on the updated second deep learning model to be trained, obtaining a sample basic special effect template label, and updating training iteration steps of model parameters until a preset convergence condition is met, and taking the second deep learning model to be trained when the preset convergence condition is met as a special effect template generation model.
In a specific embodiment, in the foregoing process of determining the loss, a preset loss function may be combined, and specifically, the preset loss function may be a deep learning loss function such as a cross entropy loss function, a mean square error loss function, and the like. The above meeting of the preset convergence condition may be that the total loss is less than or equal to a preset loss threshold, or the number of training iteration steps reaches a preset number of times, or the like, and specifically, the preset loss threshold and the preset number of times may be set in combination with the model precision and the training speed requirement in practical application.
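For illustration only, the following sketch outlines the joint training described above using PyTorch-style stand-in networks; the architectures, the encoding of templates as vectors, and the equal weighting of the two losses are assumptions and not part of the disclosed model.

```python
# Illustrative joint training sketch for the second deep learning model to be trained:
# a base template recognition network (classification) plus a template generation network,
# optimized with a combined total loss. Dimensions and loss weighting are assumed.
import torch
import torch.nn as nn

class BaseTemplateRecognizer(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, num_labels=32):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)
        self.head = nn.Linear(embed_dim, num_labels)

    def forward(self, token_ids):
        return self.head(self.embed(token_ids))            # base template label logits

class TemplateGenerator(nn.Module):
    def __init__(self, num_labels=32, hidden=128, script_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(num_labels, hidden), nn.ReLU(),
                                 nn.Linear(hidden, script_dim))

    def forward(self, label_probs):
        return self.net(label_probs)                        # encoded special effect template

recognizer, generator = BaseTemplateRecognizer(), TemplateGenerator()
recognition_loss_fn = nn.CrossEntropyLoss()                 # base template recognition loss
generation_loss_fn = nn.MSELoss()                           # template generation loss
optimizer = torch.optim.SGD(
    list(recognizer.parameters()) + list(generator.parameters()), lr=0.01)

def training_step(sample_tokens, preset_labels, preset_template_encoding):
    logits = recognizer(sample_tokens)
    recognition_loss = recognition_loss_fn(logits, preset_labels)
    sample_template = generator(torch.softmax(logits, dim=-1))
    generation_loss = generation_loss_fn(sample_template, preset_template_encoding)
    total_loss = recognition_loss + generation_loss         # total loss of the model
    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()                                        # gradient descent update
    return total_loss.item()

tokens = torch.randint(0, 1000, (4, 6))    # tokenized sample special effect keywords
labels = torch.randint(0, 32, (4,))        # preset base special effect template labels
targets = torch.randn(4, 256)              # encodings of the preset special effect templates
print(training_step(tokens, labels, targets))
```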
In the above embodiment, the preset basic special effect template matched with the at least one special effect keyword may be determined by inputting the at least one special effect keyword into the basic template recognition network in the special effect template generation model to perform the basic template recognition processing; and inputting a preset basic special effect template into a template generation network in a special effect template generation model to perform template generation processing to obtain a target special effect template, so that special effect keywords can be combined to generate corresponding special effect templates, different special effect generation requirements can be better met, the diversity of special effects is improved, and the diversity of virtual articles can be further improved.
In an alternative embodiment, after the server obtains the target special effect template and the target special effect material, special effect identification information can be generated, and the target special effect template, the target special effect material and the special effect identification information are stored in a corresponding database, so that the target special effect template and the target special effect material can be directly obtained from the database under the condition that the same special effect generation requirement exists subsequently.
S207: determining virtual article configuration information corresponding to the custom virtual article;
in an optional embodiment, after obtaining the target special effect material corresponding to the at least one material keyword and the target special effect template matched with the at least one special effect keyword, the method may further include:
generating a custom special effect corresponding to the custom virtual object based on the target special effect template and the target special effect material;
in a specific embodiment, in a case where the execution subject is a server, generating the custom special effect corresponding to the custom virtual object based on the target special effect template and the target special effect material may include:
issuing a special effect preview instruction carrying a target special effect template and target special effect materials to a first direct broadcasting object;
in a specific embodiment, the special effect preview instruction may be configured to instruct rendering of the custom special effect on the special effect preview interface corresponding to the first direct broadcast object based on the target special effect template and the target special effect material.
In a specific embodiment, the special effect preview interface corresponding to the first direct broadcast object may create a preview area of the special effect in the interface for the custom virtual object. Specifically, the terminal where the first direct-broadcasting object is located can start a local rendering engine, run a target special effect template, and load target special effect materials in combination with script running requirements in the target special effect template to render the custom special effect.
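For illustration only, a minimal sketch of a terminal-side handler for such a special effect preview instruction is shown below; the rendering engine interface and the instruction fields are assumptions.

```python
# Hypothetical terminal-side handler for a special effect preview instruction; the rendering
# engine API and the instruction field names are assumptions.

class LocalRenderingEngine:
    def run_script(self, script, materials):
        # A real engine would execute the rendering code script and draw into the preview area.
        print(f"Rendering custom effect with {len(materials)} material(s) in the preview area")

def handle_effect_preview(instruction):
    template = instruction["target_effect_template"]      # rendering code script
    materials = instruction["target_effect_materials"]    # materials required by the script
    engine = LocalRenderingEngine()                        # start the local rendering engine
    engine.run_script(template["script"], materials)

handle_effect_preview({
    "target_effect_template": {"script": "/* rose firework */"},
    "target_effect_materials": {"rose": b"...", "firework": b"..."},
})
```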
In the above embodiment, by issuing the special effect preview instruction carrying the target special effect template and the target special effect material to the first direct broadcast object, the custom special effect can be conveniently rendered on the special effect preview interface corresponding to the first direct broadcast object based on the target special effect template and the target special effect material, which helps the user clearly understand the special effect before the subsequent virtual article interaction.
In an optional embodiment, in the case that the execution subject is a terminal, the custom special effect may be rendered on the special effect preview interface directly based on the target special effect template and the target special effect material acquired in S205.
Correspondingly, the determining the virtual article configuration information corresponding to the custom virtual article includes:
and under the condition that a special effect confirmation instruction aiming at the custom special effect is received, acquiring virtual article configuration information.
In practical application, if the user is satisfied with the effect of the custom special effect, the special effect confirmation instruction can be triggered through a corresponding confirmation control or the like. Otherwise, if the user is not satisfied with the effect of the custom special effect, the text information to be processed may be re-entered and the custom virtual article creation instruction re-triggered.
In a specific embodiment, the virtual article configuration information may be attribute information of the virtual article; specifically, the virtual article configuration information may include an icon of the custom virtual article, a virtual resource amount corresponding to the custom virtual article (i.e., a virtual resource amount equivalent to the custom virtual article), an article name of the custom virtual article, and the like; optionally, the virtual article configuration information may be generated according to the text information to be processed or the custom special effect, or may be configured by a user. Specifically, in the case that virtual article configuration information is generated according to text information to be processed or a custom special effect, the virtual article configuration information can be returned to the terminal where the first direct broadcast object is located, and the user determines or modifies the virtual article configuration information.
In an optional embodiment, in the case of generating virtual article configuration information according to the text information to be processed or the custom special effect, an icon of the custom virtual article may be generated in combination with special effect elements in the text information to be processed or elements of the custom special effect; taking the text information to be processed in this embodiment, "a firework effect formed from roses, with 'me and the streamer' in the middle of the firework", as an example, the icon corresponding to the custom virtual article can be generated in combination with the rose and firework elements; the rose and firework elements may be consistent with the rose and firework patterns corresponding to the custom special effect, for example, a thumbnail of the rose and firework patterns corresponding to the custom special effect may be used, or other types of rose and firework patterns may be used. Optionally, the virtual resource amount corresponding to the custom virtual article can be determined in combination with the creation difficulty corresponding to the custom special effect, or in combination with the number of special effect keywords corresponding to the text information to be processed. Optionally, the article name of the custom virtual article can be generated in combination with the name corresponding to the custom special effect or the keywords in the text information to be processed; taking the above text information to be processed as an example, "rose firework" can be used as the article name of the custom virtual article.
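For illustration only, the following sketch shows one way the virtual article configuration information described above could be derived; the scoring rule and field names are assumptions.

```python
# Hypothetical derivation of virtual article configuration information from the extracted
# keywords and the custom special effect; the scoring rule is an assumption.

def build_article_config(effect_keywords, material_keywords, effect_thumbnail):
    # Virtual resource amount scaled by the number of special effect keywords
    # (used here as a rough proxy for creation difficulty).
    resource_amount = 10 * max(len(effect_keywords), 1)
    # Article name combined from material and effect keywords, e.g. "rose firework".
    article_name = " ".join(material_keywords + effect_keywords)
    return {
        "icon": effect_thumbnail,            # e.g. thumbnail of the rendered custom effect
        "resource_amount": resource_amount,  # virtual resource amount equivalent to the article
        "name": article_name,
    }

config = build_article_config(["firework"], ["rose"], b"<thumbnail>")
print(config["name"], config["resource_amount"])
```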
Further, after the virtual article configuration information is generated according to the custom special effect, the virtual article configuration information can be returned to the terminal (the terminal where the first direct broadcast object is located) and displayed on the custom virtual article creation interface; correspondingly, the user can modify and update the virtual article configuration information according to actual requirements, or directly trigger the special effect confirmation instruction for the custom special effect.
In another alternative embodiment, in the case that the user configures the virtual article configuration information, configuration operation information may be presented in the custom virtual article creation interface, and the configuration operation information may be used to perform configuration of the virtual article configuration information.
In the embodiment, the target special effect template and the target special effect material are combined first to generate the customized special effect corresponding to the customized virtual article, and the virtual article configuration information is acquired under the condition that the special effect confirmation instruction aiming at the customized special effect is received, so that the configuration deployment of the virtual article can be carried out on the basis that the user clearly knows the special effect in the interaction process of the subsequent virtual article, and the interaction experience of the user based on the customized virtual article is better promoted.
S209: and deploying the custom virtual object to a second live object in the target live room based on the virtual object configuration information, the target special effect template and the target special effect material.
In a specific embodiment, the second live object may be an object capable of interacting in the target live room based on the custom virtual article; specifically, in the case that the first live object is any live browsing object in the target live room, the second live object is that live browsing object, and optionally, the second live object may also include other live browsing objects in the target live room, so that the other live browsing objects can interact based on the custom virtual article in combination with their own requirements. In the case that the first live object is the live initiating object, the second live object may be at least one live browsing object in the target live room; the at least one live browsing object can be set in combination with actual application requirements, and may be, for example, all live browsing objects in the target live room or part of the live browsing objects in the target live room.
In an optional embodiment, the deploying the custom virtual object to the second live object in the target live room based on the virtual object configuration information, the target special effects template, and the target special effects material may include:
Creating article identification information corresponding to the customized virtual article according to the virtual article configuration information, the target special effect template and the target special effect material;
and issuing an article deployment instruction carrying article identification information to the second live broadcast object.
In a specific embodiment, the item deployment instruction is configured to instruct to obtain virtual item configuration information based on the item identification information, and render item information corresponding to the custom virtual item on a live virtual item panel corresponding to the second live object based on the virtual item configuration information.
In a specific embodiment, creating the item identification information corresponding to the custom virtual item according to the virtual item configuration information, the target special effect template, and the target special effect material may include: generating article identification information; and establishing corresponding relations among the virtual article configuration information, the target special effect template and the target special effect material and the article identification information respectively, and storing the corresponding relations so as to acquire the virtual article configuration information, the target special effect template and the target special effect material based on the article identification information later.
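For illustration only, the following sketch shows one way the article identification information and its correspondences could be created and stored on the server; the in-memory store and the ID scheme are assumptions.

```python
# Hypothetical server-side creation of article identification information: generate an ID and
# persist its correspondence with the configuration information, template and material.
import uuid

ARTICLE_STORE = {}   # stand-in for the server-side database

def create_article_identification(config, template, materials):
    article_id = uuid.uuid4().hex          # generate article identification information
    ARTICLE_STORE[article_id] = {          # store the correspondences for later lookup
        "config": config,
        "template": template,
        "materials": materials,
    }
    return article_id

def lookup_article(article_id):
    return ARTICLE_STORE[article_id]       # later retrieval by identification information

aid = create_article_identification({"name": "rose firework"},
                                    {"script": "/* ... */"}, {"rose": b"..."})
print(lookup_article(aid)["config"]["name"])
```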
In an optional embodiment, when the execution subject is a server, the server may create article identification information corresponding to the customized virtual article according to the virtual article configuration information, the target special effect template and the target special effect material, and issue an article deployment instruction carrying the article identification information to the second live broadcast object; optionally, if the execution subject is a terminal, the terminal may send the virtual article configuration information, the target special effect template and the target special effect material to the server, so that the server creates article identification information corresponding to the customized virtual article according to the virtual article configuration information, the target special effect template and the target special effect material; and issuing an article deployment instruction carrying article identification information to the second live broadcast object.
In a specific embodiment, the live virtual item panel corresponding to the second live object may be a virtual item panel in a live page corresponding to the second live object, where the virtual item panel is used to display a virtual item capable of interacting in a live room. Optionally, rendering, on the live virtual item panel corresponding to the second live object, the item information corresponding to the custom virtual item based on the virtual item configuration information may include: the terminal where the second live object is located can start a local rendering engine and render the object information by combining the rendering engine and the virtual object configuration information. Specifically, the item information corresponding to the custom virtual item may be used to trigger the presentation of the custom virtual item to the live broadcast initiation object. Optionally, the item information may be a control for triggering the presentation of the custom virtual item to the live broadcast initiation object; in particular, the item information may include a view of information such as an icon, an item name, etc. in the virtual item configuration information.
In the above embodiment, the virtual article configuration information, the target special effect template and the target special effect material are combined to create the article identification information corresponding to the customized virtual article; and issue the article deployment instruction carrying the article identification information to the second live broadcast object, which can facilitate the subsequent second live broadcast object to acquire the virtual article configuration information based on the article identification information, and render the article information corresponding to the customized virtual article on the live broadcast virtual article panel corresponding to the second live broadcast object based on the virtual article configuration information, so as to realize the deployment of the customized virtual article, and facilitate the user corresponding to the second live broadcast object to interact in the live broadcast room based on the customized virtual article, thereby improving the live broadcast interactivity.
In an optional embodiment, the issuing, to the second live object, an item deployment instruction carrying item identification information may include:
and issuing an article deployment instruction carrying the article identification information and the target aging information to the second live broadcast object.
In a specific embodiment, the target aging information may be used to indicate the usage aging of the custom virtual article. The target aging information may be any information that can indicate the usage aging of the custom virtual article, such as one time, a preset number of times (an integer greater than one), a preset period of time (e.g., one day or one week), and the like. By carrying the target aging information in the article deployment instruction, the terminal side can conveniently set the corresponding usage aging when deploying the custom virtual article; optionally, while displaying the article information, prompt information about the usage aging can also be displayed, so that the user can know the usage aging of the custom virtual article.
In an optional embodiment, the target aging information may be preset in combination with an actual application requirement, or may be determined in combination with object attribute information of the first direct-play object; optionally, the object attribute information may be object authority information (different objects are preset with authority information corresponding to different usage timelines), object type (different object types correspond to different usage timelines), virtual resource amount of the object on the live platform (different resource amounts correspond to different usage timelines), and other attribute information that may correspond to usage timelines of the custom virtual object.
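For illustration only, the following sketch maps object attribute information of the first live object to target aging information; the tiers and thresholds are assumptions.

```python
# Hypothetical mapping from object attribute information to target aging information
# (usage aging of the custom virtual article); the tiers are illustrative only.
from datetime import timedelta

def determine_target_aging(object_attributes):
    object_type = object_attributes.get("type", "viewer")
    resource_amount = object_attributes.get("platform_resource_amount", 0)
    if object_type == "streamer":
        return {"kind": "period", "value": timedelta(weeks=1)}   # preset period of time
    if resource_amount >= 1000:
        return {"kind": "count", "value": 10}                    # preset number of times
    return {"kind": "count", "value": 1}                         # single use by default

print(determine_target_aging({"type": "viewer", "platform_resource_amount": 50}))
```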
In the above embodiment, the object deployment instruction carries the target aging information for indicating the usage aging of the custom virtual object, so that the custom virtual object can be conveniently controlled in aging, and further, the interaction enthusiasm and efficiency of the user based on the custom virtual object can be better improved.
In an alternative embodiment, the method may further include:
in the case of receiving a virtual article interaction instruction that carries the article identification information and is triggered by the second live object based on the article information, acquiring the target special effect template and the target special effect material based on the article identification information;
and issuing an object interaction instruction carrying the target special effect template and the target special effect material to a live broadcast browsing object in the target live broadcast room.
In a specific embodiment, the virtual item interaction instruction may instruct the second live object to gift the custom virtual item to the live initiating object in the target live room; specifically, in the case that the target live broadcasting room includes at least two live broadcasting initiation objects, the presentation object of the customized virtual object is a live broadcasting initiation object corresponding to a second live broadcasting object triggering the virtual object interaction instruction in the at least two live broadcasting initiation objects. The item interaction instruction can instruct to render the custom special effect in the target live broadcast room based on the target special effect template and the target special effect material so as to distribute the custom virtual item to the live broadcast initiating object in the target live broadcast room. Optionally, in the case that the second live object is at least one live broadcast browsing object in the target live broadcast room, the receiving a virtual article interaction instruction triggered by the second live broadcast object based on the article information may include receiving a virtual article interaction instruction triggered by any one of the at least one live broadcast browsing object based on the article information.
In a specific embodiment, the second live object may trigger the virtual article interaction instruction by clicking on article information or the like; specifically, the terminal where the second live broadcast object is located can send the virtual article interaction instruction to the server, and correspondingly, the server can acquire the target special effect template and the target special effect material based on the article identification information and issue the article interaction instruction carrying the target special effect template and the target special effect material to the live broadcast browsing object in the target live broadcast room. Specifically, the live broadcast browsing object in the target live broadcast room can be all live broadcast browsing objects in the target live broadcast room, can also be part of live broadcast browsing objects, and can be set in combination with actual application requirements. The live broadcast browsing object in the target live broadcast room comprises a live broadcast browsing object triggering a virtual object interaction instruction.
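For illustration only, the following sketch shows a server-side handler for the virtual article interaction instruction; the instruction fields, the in-memory store, and the send callback are assumptions.

```python
# Hypothetical server-side handling of a virtual article interaction instruction: the template
# and material are fetched by article identification information and the article interaction
# instruction is fanned out to the live browsing objects in the room.

ARTICLE_STORE = {
    "abc123": {"template": {"script": "/* rose firework */"},
               "materials": {"rose": b"...", "firework": b"..."}},
}

def handle_virtual_article_interaction(instruction, room_viewers, send):
    article = ARTICLE_STORE[instruction["article_id"]]          # lookup by identification info
    item_interaction_instruction = {
        "type": "article_interaction",
        "target_effect_template": article["template"],
        "target_effect_materials": article["materials"],
        "gift_to": instruction["live_initiating_object"],       # presentation target
    }
    for viewer in room_viewers:                                  # all or part of browsing objects
        send(viewer, item_interaction_instruction)

handle_virtual_article_interaction(
    {"article_id": "abc123", "live_initiating_object": "streamer_1"},
    ["viewer_1", "viewer_2"],
    send=lambda viewer, msg: print(f"-> {viewer}: {msg['type']}"),
)
```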
In the above embodiment, in the case of receiving the virtual article interaction instruction that carries the article identification information and is triggered by the second live object based on the article information of the custom virtual article, the target special effect template and the target special effect material corresponding to the custom virtual article can be conveniently obtained based on the article identification information, and the article interaction instruction carrying the target special effect template and the target special effect material can be issued to the live browsing objects in the target live room, which facilitates rendering the custom special effect in the target live room (on the live pages corresponding to the live browsing objects in the target live room) so as to gift the custom virtual article to the live initiating object in the target live room.
In an optional embodiment, the text information to be processed is associated with target interaction information, and interaction index data corresponding to the target interaction information is greater than a preset threshold; the interaction index data represents the interaction heat of the target interaction information on the live broadcast platform.
In a specific embodiment, the interaction index data corresponding to the target interaction information is greater than a preset threshold value, which may indicate that the target interaction information belongs to information of a real-time hot event. The association of the text information to be processed and the target interaction information may be that the text information to be processed contains the target interaction information. Optionally, for example, the real-time hotspot event is a current sport event; accordingly, the target interaction information may be descriptive information related to the athletic event, such as mascot in the athletic event; alternatively, the text information to be processed may include special effect description information including the mascot.
In the above embodiment, the text information to be processed associated with the real-time hot event is the special effect description information of the custom virtual article, so that interactivity and timeliness of the custom virtual article can be better improved, and further, a host can be helped to better promote the interactive atmosphere of the live broadcasting room.
In an alternative embodiment, the method may further include:
and sending the creation prompt information of the customized virtual object to the first direct broadcast object.
In a specific embodiment, the creating prompt information may carry target interaction information, where the creating prompt information is used to prompt creation of a custom virtual article associated with the target interaction information. Specifically, the customized virtual article associated with the target interaction information may be a customized virtual article including the target interaction information, or may be a customized virtual article using the target interaction information as a subject.
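For illustration only, the following sketch checks the interaction index against a preset threshold before sending the creation prompt information to the first live object; the threshold and the data structures are assumptions.

```python
# Hypothetical server-side check that target interaction information belongs to a real-time
# hot event (interaction index above a preset threshold) before sending the creation prompt.

PRESET_THRESHOLD = 10_000

def maybe_send_creation_prompt(interaction_info, first_live_object, send):
    if interaction_info["interaction_index"] > PRESET_THRESHOLD:  # hot on the live platform
        send(first_live_object, {
            "type": "creation_prompt",
            "target_interaction_info": interaction_info["text"],  # e.g. mascot of a sport event
        })

maybe_send_creation_prompt(
    {"text": "event mascot", "interaction_index": 25_000},
    "streamer_1",
    send=lambda obj, msg: print(f"-> {obj}: {msg}"),
)
```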
In practical application, the creation prompt information can be displayed in the live page or in the custom virtual article creation interface, which can be set in combination with practical application requirements.
In the above embodiment, by sending the user-defined virtual article creation prompt information carrying the target interaction information to the first direct broadcast object, the direct broadcast object can be helped to know the real-time hot event in time, so that the direct broadcast object can be conveniently and better created to have the user-defined virtual article with good interaction, and further the direct broadcast interaction atmosphere can be better promoted.
As can be seen from the technical solutions provided by the embodiments of the present specification, when a first live object in a target live room triggers a custom virtual article creation instruction, the custom virtual article creation instruction carries text information to be processed, where the text information to be processed is the special effect description information corresponding to the custom virtual article to be generated; through semantic recognition of the text information to be processed, special effect keywords and material keywords can be separated in combination with the user's text expression, so that the special effect pattern and special effect material corresponding to the custom virtual article are accurately recognized; then, the corresponding target special effect material is acquired in combination with the at least one material keyword, and the target special effect template corresponding to the special effect keywords is generated based on the special effect template generation model, so that automatic generation of the code script required for special effect rendering can be realized and the diverse expression requirements of users can be met; next, the virtual article configuration information corresponding to the custom virtual article is determined, and the custom virtual article is deployed to the second live object in the target live room in combination with the virtual article configuration information, the target special effect template and the target special effect material. In this way, virtual articles can be customized during live broadcast, which improves the personalization of virtual articles, greatly improves the diversity of created virtual articles, and better improves the interaction enthusiasm of users based on virtual articles during live broadcast as well as the live interactivity; frequent jumps of users between live rooms and the large number of invalid stream pulling operations such jumps bring are reduced, thereby reducing the waste of system resources and improving system performance.
Fig. 4 is a flowchart illustrating another virtual article processing method according to an exemplary embodiment, which may be applied to a terminal (the terminal that triggers the custom virtual article creation instruction). As shown in fig. 4, the method may include the following steps:
S401: responding to a custom virtual article creation instruction triggered by a first direct-broadcasting object in a target direct-broadcasting room, the custom virtual article creation instruction carrying text information to be processed, and acquiring a target special effect template matched with at least one special effect keyword in the text information to be processed and a target special effect material corresponding to at least one material keyword in the text information to be processed;
in a specific embodiment, the text information to be processed is special effect description information corresponding to the custom virtual article to be generated; the at least one special effect keyword and the at least one material keyword are obtained by carrying out semantic recognition on the text information to be processed; in particular, for details of semantic recognition, reference may be made to the above related descriptions, which are not repeated here. The target effect template is generated based on the effect template generation model. Specifically, specific details of generating the target special effect template based on the special effect template generation model can be referred to the above related description, and will not be described herein. The details of obtaining the target special effect material can be referred to the above related description, and will not be described herein.
S403: and determining virtual article configuration information corresponding to the customized virtual article.
In a specific embodiment, details of determining the configuration information of the virtual article may be referred to the above related description, which is not repeated herein.
S405: and sending a special effect confirmation instruction to the server.
In an optional embodiment, after obtaining the target special effect template matched with the at least one special effect keyword in the text information to be processed and the target special effect material corresponding to the at least one material keyword in the text information to be processed, the method may further include:
rendering the custom special effect corresponding to the custom virtual object on the special effect preview interface based on the target special effect template and the target special effect material;
correspondingly, the sending of the special effect confirmation instruction to the server comprises the following steps:
and sending the special effect confirmation instruction to the server under the condition that the special effect confirmation instruction aiming at the custom special effect is detected.
In a particular embodiment, the special effects preview interface may create a preview area of the special effects in the interface for the custom virtual item. Specifically, the local rendering engine can be started, the target special effect template is operated, and the target special effect material is loaded in combination with script operation requirements in the target special effect template so as to render the custom special effect.
In practical application, if the user is satisfied with the effect of the custom special effect, the special effect confirmation instruction can be triggered through a corresponding confirmation control or the like. Otherwise, if the user is not satisfied with the effect of the custom special effect, the text information to be processed may be re-entered and the custom virtual article creation instruction re-triggered.
In a specific embodiment, the special effect confirmation instruction is configured to instruct deployment of the custom virtual article to the second live object in the target live room based on the target special effect template, the target special effect material, and the virtual article configuration information. The special effect confirmation instruction carries the virtual article configuration information; optionally, if the terminal obtains the special effect keywords and the material keywords through semantic recognition, generates the target special effect template in combination with the special effect keywords and the special effect template generation model, and acquires the target special effect material in combination with the material keywords, the special effect confirmation instruction also carries the target special effect template and the target special effect material; otherwise, if the terminal obtains the target special effect template and the target special effect material from the server, the special effect confirmation instruction may not carry the target special effect template and the target special effect material.
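For illustration only, the following sketch builds the special effect confirmation instruction according to the rule above; the field names are assumptions.

```python
# Hypothetical construction of the special effect confirmation instruction on the terminal
# side: the template and material are attached only when they were produced on the terminal
# rather than obtained from the server.

def build_confirmation_instruction(article_config, generated_locally,
                                   template=None, materials=None):
    instruction = {"type": "effect_confirmation",
                   "article_config": article_config}             # always carries the config info
    if generated_locally:                                        # terminal did recognition/generation
        instruction["target_effect_template"] = template
        instruction["target_effect_materials"] = materials
    return instruction

payload = build_confirmation_instruction(
    {"name": "rose firework", "resource_amount": 10, "icon": b"<thumbnail>"},
    generated_locally=False,
)
print(sorted(payload.keys()))
```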
In a specific embodiment, in the case that the first direct-broadcast object is a direct-broadcast initiation object, the second direct-broadcast object is at least one direct-broadcast browsing object in the target direct-broadcast room; and under the condition that the first direct broadcast object is any direct broadcast browsing object in the target direct broadcast room, the second direct broadcast object is any direct broadcast browsing object, and optionally, the second direct broadcast object can also be any direct broadcast browsing object and other direct broadcast browsing objects in the target direct broadcast room, so that the other direct broadcast browsing objects can interact based on the self-defined virtual object by combining with the self-requirements.
In the embodiment, the target special effect template and the target special effect material are combined, the customized special effect corresponding to the customized virtual article is rendered on the special effect preview interface, and the virtual article configuration information is obtained under the condition that the special effect confirmation instruction for the customized special effect is received, so that the configuration deployment of the virtual article can be carried out on the basis that the user clearly knows the special effect in the interaction process of the subsequent virtual article, and the interaction experience of the user based on the customized virtual article is better improved.
In an optional embodiment, in a case where the first live object is any live browsing object in the target live room, and the second live object includes any live browsing object, the method further includes:
Receiving an article deployment instruction issued by a server;
acquiring virtual article configuration information based on the article identification information;
based on the virtual article configuration information, article information corresponding to the customized virtual article is rendered on a live virtual article panel corresponding to any live browsing object.
In a specific embodiment, the article deployment instruction carries the article identification information corresponding to the custom virtual article; the article identification information is created based on the virtual article configuration information, the target special effect template and the target special effect material. Specifically, rendering, based on the virtual article configuration information, the article information corresponding to the custom virtual article on the live virtual article panel corresponding to any live browsing object may include: starting a local rendering engine by the terminal where the second live object is located, and rendering the article information in combination with the rendering engine and the virtual article configuration information. Specifically, the article information may include a view of the icon, the article name, and other information in the virtual article configuration information.
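For illustration only, the following sketch shows a terminal-side handler for the article deployment instruction; the configuration lookup and the panel interface are assumptions.

```python
# Hypothetical terminal-side handling of an article deployment instruction: the virtual
# article configuration information is resolved by article identification information and
# the item view is rendered on the live virtual article panel.

def fetch_article_config(article_id):
    # Would query the server by identification information; stubbed here for the sketch.
    return {"name": "rose firework", "icon": b"<thumbnail>", "resource_amount": 10}

class LiveVirtualArticlePanel:
    def render_item(self, icon, name):
        print(f"Panel item rendered: {name}")                    # view of icon and article name

def handle_article_deployment(instruction, panel):
    config = fetch_article_config(instruction["article_id"])     # get config by article ID
    panel.render_item(config["icon"], config["name"])

handle_article_deployment({"article_id": "abc123"}, LiveVirtualArticlePanel())
```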
In the above embodiment, an article deployment instruction carrying article identification information is received, where the article identification information is created by combining virtual article configuration information, a target special effect template and a target special effect material; the virtual article configuration information can be conveniently obtained based on the article identification information, article information corresponding to the customized virtual article is rendered on the live broadcast virtual article panel based on the virtual article configuration information, the deployment of the customized virtual article can be realized, and users corresponding to the second live broadcast object can interact in the live broadcast room based on the customized virtual article, so that live broadcast interactivity is improved.
In an optional embodiment, the article deployment instruction may further carry target aging information;
in a specific embodiment, the target aging information may be used to indicate the usage aging of the custom virtual article. The target aging information may be any information that can indicate the usage aging of the custom virtual article, such as one time, a preset number of times (an integer greater than one), a preset period of time (e.g., one day or one week), and the like. By carrying the target aging information in the article deployment instruction, the terminal side can conveniently set the corresponding usage aging when deploying the custom virtual article; optionally, while displaying the article information, prompt information about the usage aging can also be displayed, so that the user can know the usage aging of the custom virtual article.
In an optional embodiment, the target aging information may be preset in combination with an actual application requirement, or may be determined in combination with object attribute information of the first direct-play object; optionally, the object attribute information may be object authority information (different objects are preset with authority information corresponding to different usage timelines), object type (different object types correspond to different usage timelines), virtual resource amount of the object on the live platform (different resource amounts correspond to different usage timelines), and other attribute information that may correspond to usage timelines of the custom virtual object.
In the above embodiment, the object deployment instruction carries the target aging information for indicating the usage aging of the custom virtual object, so that the custom virtual object can be conveniently controlled in aging, and further, the interaction enthusiasm and efficiency of the user based on the custom virtual object can be better improved.
In an alternative embodiment, the method may further include:
receiving an item interaction instruction sent by a server under the condition that a second live object triggers a virtual item interaction instruction based on item information corresponding to a user-defined virtual item, wherein the item interaction instruction carries a target special effect template and target special effect materials;
and rendering the custom special effect on a live broadcast page of the target live broadcast room based on the target special effect template and the target special effect material so as to distribute the custom virtual object to a live broadcast initiating object in the target live broadcast room.
In a particular embodiment, the item information may include a view of information such as an icon, item name, etc. in the virtual item configuration information. Specifically, the specific details of rendering the custom special effects on the live broadcast page corresponding to any live broadcast browsing object based on the target special effect template and the target special effect material to distribute the custom virtual objects to the live broadcast initiating objects in the target live broadcast room can be referred to the above related description, and will not be described herein.
In the above embodiment, under the condition that the second live object is triggered based on the item information of the customized virtual item and the virtual item interaction instruction, the item interaction instruction which is issued by the server and carries the target special effect template and the target special effect material may be received, so that the customized special effect is rendered on the live page, and the customized virtual item is distributed to the live broadcast initiating object in the target live broadcast room.
According to the technical scheme provided by the embodiments of the present specification, in response to the custom virtual article creation instruction carrying the text information to be processed triggered by the first direct broadcast object in the target direct broadcast room, the target special effect template matched with at least one special effect keyword in the text information to be processed and the target special effect material corresponding to at least one material keyword in the text information to be processed can be obtained; the at least one special effect keyword and the at least one material keyword are obtained by performing semantic recognition on the text information to be processed, and the target special effect template is generated based on the special effect template generation model, so that the special effect keywords and material keywords can be separated in combination with the user's text expression and the special effect pattern and special effect material corresponding to the custom virtual article can be accurately identified; the generation of a custom special effect template based on special effect keywords is thereby realized, which can meet the diverse expression requirements of users. Then, the virtual article configuration information corresponding to the custom virtual article is determined and the special effect confirmation instruction is sent to the server, so that the server can conveniently deploy the custom virtual article to the second live object in the target live room in combination with the virtual article configuration information, the target special effect template and the target special effect material. In this way, virtual articles can be customized during live broadcast, which improves the personalization of virtual articles, greatly improves the diversity of created virtual articles, and better improves the interaction enthusiasm of users based on virtual articles during live broadcast as well as the live interactivity; frequent jumps of users between live rooms and the large number of invalid stream pulling operations such jumps bring are reduced, thereby reducing the waste of system resources and improving system performance.
With respect to the method in the above embodiments, the specific details of the steps have been described in the related embodiments regarding the method, and will not be described in detail herein.
Fig. 5 is a block diagram of a virtual article processing apparatus, according to an example embodiment. Referring to fig. 5, the apparatus includes:
the semantic identification module 510 is configured to perform semantic recognition on text information to be processed to obtain at least one special effect keyword and at least one material keyword, in the case that a first direct broadcast object in the target direct broadcast room triggers a custom virtual article creation instruction carrying the text information to be processed; the text information to be processed is special effect description information corresponding to the custom virtual article to be generated;
the special effect data obtaining module 520 is configured to obtain a target special effect material corresponding to the at least one material keyword and a target special effect template matched with the at least one special effect keyword, wherein the target special effect template is generated based on the special effect template generation model;
a first virtual article configuration information determining module 530 configured to perform determining virtual article configuration information corresponding to the custom virtual article;
the custom virtual item deployment module 540 is configured to perform deploying the custom virtual item to the second live object in the target live room based on the virtual item configuration information, the target special effect template and the target special effect material;
Wherein, under the condition that the first direct broadcast object is a direct broadcast initiation object, the second direct broadcast object is at least one direct broadcast browsing object in the target direct broadcast room; and when the first direct broadcast object is any direct broadcast browsing object in the target direct broadcast room, the second direct broadcast object is any direct broadcast browsing object.
In an alternative embodiment, the custom virtual article deployment module 540 includes:
the article identification information creation unit is configured to execute the creation of article identification information corresponding to the customized virtual article according to the virtual article configuration information, the target special effect template and the target special effect material;
the article deployment instruction issuing unit is configured to execute the article deployment instruction carrying the article identification information to the second live broadcast object; the article deployment instruction is used for indicating to acquire virtual article configuration information based on the article identification information and rendering article information corresponding to the customized virtual article on a live broadcast virtual article panel corresponding to the second live broadcast object based on the virtual article configuration information.
In an optional embodiment, the article deployment instruction issuing unit is specifically configured to execute issuing an article deployment instruction carrying article identification information and target aging information to the second live object;
The target aging information is used for indicating the usage aging of the custom virtual article.
In an alternative embodiment, the apparatus further comprises:
the data acquisition module is configured to acquire the target special effect template and the target special effect material based on the article identification information, in the case of receiving a virtual article interaction instruction that carries the article identification information and is triggered by the second live object based on the article information;
the object interaction instruction issuing module is configured to issue, to the live browsing objects in the target live room, an object interaction instruction carrying the target special effect template and the target special effect material, the object interaction instruction instructing the custom special effect to be rendered in the target live room based on the target special effect template and the target special effect material so as to distribute the custom virtual article to the live initiating object in the target live room.
In an alternative embodiment, the special effects template generation model includes a base template recognition network and a template generation network; the device further comprises:
the preset basic special effect template determining unit is configured to perform basic template identification processing by inputting at least one special effect keyword into a basic template identification network, and determine a preset basic special effect template matched with the at least one special effect keyword;
The template generation processing unit is configured to execute template generation processing by inputting a preset basic special effect template into a template generation network to obtain a target special effect template.
In an alternative embodiment, the semantic recognition module 510 is specifically configured to perform keyword class recognition by inputting the text information to be processed into a preset semantic recognition model, so as to obtain at least one special effect keyword and at least one material keyword.
In an alternative embodiment, the text information to be processed is associated with target interaction information, and the interaction index data corresponding to the target interaction information is greater than a preset threshold; the interaction index data represents the interaction heat of the target interaction information on the live broadcast platform.
In an alternative embodiment, the apparatus further comprises:
the creation prompt information sending module is configured to send creation prompt information of the custom virtual article to the first direct broadcast object, the creation prompt information carrying target interaction information and being used for prompting the creation of a custom virtual article associated with the target interaction information.
In an alternative embodiment, the apparatus further comprises:
the custom special effect generation module is configured to generate the custom special effect corresponding to the custom virtual article based on the target special effect template and the target special effect material, after the target special effect material corresponding to the at least one material keyword and the target special effect template matched with the at least one special effect keyword are acquired;
The first virtual article configuration information determining module is specifically configured to execute obtaining virtual article configuration information when receiving a special effect confirmation instruction for a custom special effect.
In an alternative embodiment, the custom effect generation module includes:
the special effect preview instruction issuing unit is configured to execute issuing a special effect preview instruction carrying a target special effect template and target special effect materials to the first direct broadcasting object;
the special effect preview instruction is used for indicating to render the custom special effect on the special effect preview interface corresponding to the first direct broadcasting object based on the target special effect template and the target special effect material.
In an optional embodiment, the virtual article configuration information is generated according to text information to be processed or a custom special effect corresponding to the custom virtual article; the virtual article configuration information comprises icons of the customized virtual articles, virtual resource amounts corresponding to the customized virtual articles and article names of the customized virtual articles.
The specific manner in which the various modules perform operations in the apparatus of the above embodiment has been described in detail in the embodiments of the method and will not be described in detail herein.
FIG. 6 is a block diagram of another virtual article processing apparatus according to an example embodiment. Referring to fig. 6, the apparatus includes:
the special effect information obtaining module 610 is configured to, in response to a custom virtual article creation instruction carrying text information to be processed triggered by a first direct broadcast object in a target direct broadcast room, obtain a target special effect template matched with at least one special effect keyword in the text information to be processed and a target special effect material corresponding to at least one material keyword in the text information to be processed; the text information to be processed is special effect description information corresponding to the custom virtual article to be generated; the at least one special effect keyword and the at least one material keyword are obtained by performing semantic recognition on the text information to be processed; the target special effect template is generated based on the special effect template generation model;
a second virtual article configuration information determining module 620 configured to perform determining virtual article configuration information corresponding to the custom virtual article;
the special effect confirmation instruction sending module 630 is configured to send a special effect confirmation instruction to the server, where the special effect confirmation instruction is used to instruct to deploy the custom virtual object to the second live object in the target live room based on the target special effect template, the target special effect material and the virtual object configuration information.
In an optional embodiment, in a case where the first live object is any live browsing object in the target live room, and the second live object includes any live browsing object, the apparatus further includes:
the article deployment instruction receiving module is configured to execute and receive an article deployment instruction issued by the server; the article deployment instruction carries article identification information corresponding to the customized virtual article; the article identification information is created based on virtual article configuration information, a target special effect template and target special effect materials;
a virtual article configuration information acquisition module configured to perform acquisition of virtual article configuration information based on article identification information;
and the article information rendering module is configured to execute the rendering of the article information corresponding to the customized virtual article on the live broadcast virtual article panel corresponding to any live broadcast browsing object based on the virtual article configuration information.
In an alternative embodiment, the item deployment instruction also carries target aging information;
the target aging information is used for indicating the usage aging of the custom virtual article.
In an alternative embodiment, the apparatus further comprises:
the article interaction instruction receiving module is configured to receive an article interaction instruction issued by the server in the case that the second live object triggers a virtual article interaction instruction based on the article information corresponding to the custom virtual article, the article interaction instruction carrying the target special effect template and the target special effect material;
The first custom effect rendering module is configured to render the custom special effect on the live page of the target live room based on the target special effect template and the target special effect material, so as to distribute the custom virtual article to the live initiating object in the target live room.
In an alternative embodiment, the apparatus further comprises:
the second custom effect rendering module is configured to render, based on the target special effect template and the target special effect material, the custom special effect corresponding to the custom virtual article on the special effect preview interface, after the target special effect template matched with at least one special effect keyword in the text information to be processed and the target special effect material corresponding to at least one material keyword in the text information to be processed are acquired;
the special effect confirmation instruction transmitting module is specifically configured to send the special effect confirmation instruction to the server when the special effect confirmation instruction for the custom special effect is detected.
In an optional embodiment, the virtual article configuration information is generated according to text information to be processed or a custom special effect corresponding to the custom virtual article; the virtual article configuration information comprises icons of the customized virtual articles, virtual resource amounts corresponding to the customized virtual articles and article names of the customized virtual articles.
The specific manner in which the various modules perform operations in the apparatus of the above embodiment has been described in detail in the embodiments of the method and will not be described in detail herein.
Fig. 7 is a block diagram illustrating an electronic device for virtual article processing, which may be a terminal, according to an exemplary embodiment, and an internal structure diagram thereof may be as shown in fig. 7. The electronic device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the electronic device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a virtual article handling method. The display screen of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the electronic equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
Fig. 8 is a block diagram of another electronic device for virtual article processing according to an exemplary embodiment; the electronic device may be a server, and its internal structure may be as shown in Fig. 8. The electronic device includes a processor, a memory, and a network interface connected by a system bus. The processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the electronic device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement a virtual article processing method.
It will be appreciated by those skilled in the art that the structures shown in Fig. 7 or Fig. 8 are merely block diagrams of portions of structures related to the disclosed aspects and do not constitute limitations on the electronic devices to which the disclosed aspects may be applied, and that a particular electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In an exemplary embodiment, there is also provided an electronic device including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the virtual article processing method as in the embodiments of the present disclosure.
In an exemplary embodiment, a computer-readable storage medium is also provided; when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the virtual article processing method in the embodiments of the present disclosure.
In an exemplary embodiment, a computer program product is also provided, containing instructions that, when run on a computer, cause the computer to perform the virtual article processing method in the embodiments of the present disclosure.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the steps of the method embodiments described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (21)

1. A virtual article processing method, comprising:
under the condition that a first live object in a target live room triggers a custom virtual article creation instruction carrying text information to be processed, carrying out semantic recognition on the text information to be processed to obtain at least one special effect keyword and at least one material keyword; the text information to be processed is special effect description information corresponding to the custom virtual article to be generated;
acquiring a target special effect material corresponding to the at least one material keyword and a target special effect template matched with the at least one special effect keyword, wherein the target special effect template is generated based on a special effect template generation model, and the target special effect template is a rendering code script of a custom special effect corresponding to the custom virtual article;
determining virtual article configuration information corresponding to the customized virtual article;
and deploying the custom virtual article to a second live object in the target live room based on the virtual article configuration information, the target special effect template and the target special effect material.
2. The virtual article processing method of claim 1, wherein the deploying the custom virtual article to the second live object in the target live room based on the virtual article configuration information, the target special effect template, and the target special effect material comprises:
creating article identification information corresponding to the customized virtual article according to the virtual article configuration information, the target special effect template and the target special effect material;
issuing an article deployment instruction carrying the article identification information to the second live broadcast object; the article deployment instruction is used for indicating to acquire the virtual article configuration information based on the article identification information and rendering the article information corresponding to the customized virtual article on the live broadcast virtual article panel corresponding to the second live broadcast object based on the virtual article configuration information.
3. The virtual article processing method according to claim 2, wherein the issuing, to the second live object, an article deployment instruction carrying the article identification information includes:
issuing the article deployment instruction carrying the article identification information and target validity information to the second live object;
the target validity information is used for indicating the usage validity period of the custom virtual article.
4. The virtual article processing method of claim 2, wherein the method further comprises:
under the condition that the second live object triggers, based on the article information, a virtual article interaction instruction carrying the article identification information, acquiring the target special effect template and the target special effect material based on the article identification information;
issuing an article interaction instruction carrying the target special effect template and the target special effect material to a live object in the target live broadcasting room, wherein the article interaction instruction instructs rendering of the custom special effect in the target live broadcasting room based on the target special effect template and the target special effect material so as to distribute the custom virtual article to a live broadcasting initiating object in the target live broadcasting room.
5. The virtual article processing method of claim 1, wherein the special effects template generation model comprises a base template recognition network and a template generation network; the method further comprises the steps of:
inputting the at least one special effect keyword into the basic template recognition network to perform basic template recognition processing, and determining a preset basic special effect template matched with the at least one special effect keyword;
inputting the preset basic special effect template into the template generation network to perform template generation processing to obtain the target special effect template.
6. The virtual article processing method according to claim 1, wherein the performing semantic recognition on the text information to be processed to obtain at least one special effect keyword and at least one material keyword includes:
inputting the text information to be processed into a preset semantic recognition model for keyword class recognition to obtain the at least one special effect keyword and the at least one material keyword.
7. The virtual article processing method according to claim 1, wherein the text information to be processed is associated with target interaction information, and interaction index data corresponding to the target interaction information is greater than a preset threshold; and the interaction index data represents the interaction heat of the target interaction information on the live broadcast platform.
8. The virtual article processing method of claim 7, further comprising:
and sending creation prompt information of the customized virtual article to the first live object, wherein the creation prompt information carries the target interaction information, and the creation prompt information is used for prompting the creation of the customized virtual article associated with the target interaction information.
9. The virtual article processing method according to any one of claims 1 to 8, wherein after obtaining the target special effect material corresponding to the at least one material keyword and the target special effect template matched with the at least one special effect keyword, the method further comprises:
generating a custom special effect corresponding to the custom virtual article based on the target special effect template and the target special effect material;
the determining the virtual article configuration information corresponding to the customized virtual article comprises the following steps:
and under the condition that a special effect confirmation instruction aiming at the custom special effect is received, acquiring the virtual article configuration information.
10. The method of claim 9, wherein generating the custom special effect corresponding to the custom virtual article based on the target special effect template and the target special effect material comprises:
issuing a special effect preview instruction carrying the target special effect template and the target special effect material to the first live object;
the special effect preview instruction is used for indicating to render the custom special effect on the special effect preview interface corresponding to the first live object based on the target special effect template and the target special effect material.
11. The virtual article processing method according to any one of claims 1 to 8, wherein the virtual article configuration information is generated according to the text information to be processed or a custom special effect corresponding to the custom virtual article; the virtual article configuration information comprises an icon of the customized virtual article, a virtual resource amount corresponding to the customized virtual article and an article name of the customized virtual article.
12. A virtual article processing method, comprising:
responding to a custom virtual article creation instruction triggered by a first live object in a target live room, wherein the custom virtual article creation instruction carries text information to be processed, and acquiring a target special effect template matched with at least one special effect keyword in the text information to be processed and a target special effect material corresponding to at least one material keyword in the text information to be processed; the text information to be processed is special effect description information corresponding to the custom virtual article to be generated; the at least one special effect keyword and the at least one material keyword are obtained by carrying out semantic recognition on the text information to be processed; the target special effect template is generated based on a special effect template generation model, and is a rendering code script of the custom special effect corresponding to the custom virtual article;
determining virtual article configuration information corresponding to the custom virtual article;
and sending a special effect confirmation instruction to a server, wherein the special effect confirmation instruction is used for indicating that the custom virtual article is deployed to a second live object in the target live room based on the target special effect template, the target special effect material and the virtual article configuration information.
13. The virtual article processing method of claim 12, wherein, in the case where the first live object is any live browsing object in the target live room and the second live object includes the any live browsing object, the method further comprises:
receiving an article deployment instruction issued by the server; the article deployment instruction carries article identification information corresponding to the customized virtual article; the article identification information is created based on the virtual article configuration information, the target special effect template and the target special effect material;
acquiring the virtual article configuration information based on the article identification information;
and rendering the article information corresponding to the customized virtual article on the live broadcast virtual article panel corresponding to the any live browsing object based on the virtual article configuration information.
14. The virtual article processing method of claim 13, wherein the article deployment instruction further carries target validity information;
the target validity information is used for indicating the usage validity period of the custom virtual article.
15. The virtual article processing method of claim 12, wherein the method further comprises:
receiving an article interaction instruction sent by the server under the condition that the second live object triggers a virtual article interaction instruction based on article information corresponding to the custom virtual article, wherein the article interaction instruction carries the target special effect template and the target special effect material;
and rendering the custom special effect on a live broadcast page of the target live room based on the target special effect template and the target special effect material so as to distribute the custom virtual article to a live broadcast initiating object in the target live room.
16. The virtual article processing method according to claim 12, wherein after the target special effect template matched with at least one special effect keyword in the text information to be processed and the target special effect material corresponding to at least one material keyword in the text information to be processed are obtained, the method further comprises:
rendering the custom special effect corresponding to the custom virtual article on a special effect preview interface based on the target special effect template and the target special effect material;
the sending the special effect confirmation instruction to the server comprises the following steps:
and sending the special effect confirmation instruction to a server under the condition that the special effect confirmation instruction aiming at the customized special effect is detected.
17. The virtual article processing method according to any one of claims 12 to 16, wherein the virtual article configuration information is generated according to the text information to be processed or a custom special effect corresponding to the custom virtual article; the virtual article configuration information comprises an icon of the customized virtual article, a virtual resource amount corresponding to the customized virtual article and an article name of the customized virtual article.
18. A virtual article processing apparatus, comprising:
the semantic recognition module is configured to perform, under the condition that a first live object in a target live room triggers a custom virtual article creation instruction carrying text information to be processed, semantic recognition on the text information to be processed to obtain at least one special effect keyword and at least one material keyword; the text information to be processed is special effect description information corresponding to the custom virtual article to be generated;
the special effect data acquisition module is configured to acquire a target special effect material corresponding to the at least one material keyword and a target special effect template matched with the at least one special effect keyword, wherein the target special effect template is generated based on a special effect template generation model, and the target special effect template is a rendering code script of a custom special effect corresponding to the custom virtual article;
the first virtual article configuration information determining module is configured to determine virtual article configuration information corresponding to the customized virtual article;
and the custom virtual article deployment module is configured to deploy the custom virtual article to a second live object in the target live room based on the virtual article configuration information, the target special effect template and the target special effect material.
19. A virtual article processing apparatus, comprising:
the special effect information acquisition module is configured to execute a user-defined virtual article creation instruction which is triggered by a first direct broadcast object in a target direct broadcast room and carries text information to be processed, and acquire a target special effect template matched with at least one special effect keyword in the text information to be processed and target special effect materials corresponding to at least one material keyword in the text information to be processed; the text information to be processed is special effect description information corresponding to the custom virtual article to be generated; the at least one special effect keyword and the at least one material keyword are obtained by carrying out semantic recognition on the text information to be processed; the target special effect template is generated based on a special effect template generation model;
the second virtual article configuration information determining module is configured to determine virtual article configuration information corresponding to the custom virtual article, wherein the target special effect template is a rendering code script of the custom special effect corresponding to the custom virtual article;
and the special effect confirmation instruction sending module is configured to send a special effect confirmation instruction to a server, wherein the special effect confirmation instruction is used for indicating that the custom virtual article is deployed to a second live object in the target live room based on the target special effect template, the target special effect material and the virtual article configuration information.
20. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the virtual article processing method of any one of claims 1 to 17.
21. A computer-readable storage medium, characterized in that, when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the virtual article processing method of any one of claims 1 to 17.
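Read together, the method of claim 1 above runs through four steps: semantic recognition of the text to be processed, acquisition of the target special effect template and material, determination of the configuration information, and deployment to the second live object. The TypeScript sketch below walks through those steps with stubbed helpers; every name and stub body is an illustrative assumption, not an implementation defined by the claims.

```typescript
// End-to-end sketch of the flow in claim 1. All helpers are hypothetical
// stand-ins with toy bodies; the claims do not define them.
interface Keywords {
  effectKeywords: string[];
  materialKeywords: string[];
}

// Stub for the semantic recognition step (a preset semantic recognition model
// in the embodiments).
function extractKeywords(textToProcess: string): Keywords {
  const words = textToProcess.split(/\s+/).filter(w => w.length > 0);
  return { effectKeywords: words.slice(0, 1), materialKeywords: words.slice(1) };
}

// Stub for the special effect template generation model: returns a tiny
// rendering code script keyed to the effect keywords.
function generateTemplate(effectKeywords: string[]): string {
  return `container.setAttribute("data-effect", ${JSON.stringify(effectKeywords.join("-"))});`;
}

// Stub material lookup for the material keywords.
function fetchMaterials(materialKeywords: string[]): string[] {
  return materialKeywords.map(k => `https://example.invalid/materials/${encodeURIComponent(k)}.png`);
}

interface ItemConfig {
  icon: string;
  resourceAmount: number;
  itemName: string;
}

function createCustomVirtualItem(roomId: string, textToProcess: string): void {
  // 1. Semantic recognition of the text to be processed.
  const { effectKeywords, materialKeywords } = extractKeywords(textToProcess);

  // 2. Target special effect template (a rendering code script) and materials.
  const effectTemplate = generateTemplate(effectKeywords);
  const effectMaterials = fetchMaterials(materialKeywords);

  // 3. Virtual article configuration information.
  const config: ItemConfig = {
    icon: effectMaterials[0] ?? "",
    resourceAmount: 10,
    itemName: textToProcess.slice(0, 12),
  };

  // 4. Deploy the custom virtual article to the second live objects in the room
  //    (logged here; a real server would push an article deployment instruction).
  console.log("deploy to room", roomId, { config, effectTemplate, effectMaterials });
}
```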
CN202310638533.1A 2023-05-31 2023-05-31 Virtual article processing method and device, electronic equipment and storage medium Active CN116366909B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310638533.1A CN116366909B (en) 2023-05-31 2023-05-31 Virtual article processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116366909A (en) 2023-06-30
CN116366909B (en) 2023-10-17

Family

ID=86934977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310638533.1A Active CN116366909B (en) 2023-05-31 2023-05-31 Virtual article processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116366909B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108156503A (en) * 2017-12-14 2018-06-12 北京奇艺世纪科技有限公司 A kind of method and device for generating present
CN111277854A (en) * 2020-03-04 2020-06-12 网易(杭州)网络有限公司 Display method and device of virtual live broadcast room, electronic equipment and storage medium
CN112087655A (en) * 2020-08-07 2020-12-15 广州华多网络科技有限公司 Method and device for presenting virtual gift and electronic equipment
CN112087669A (en) * 2020-08-07 2020-12-15 广州华多网络科技有限公司 Method and device for presenting virtual gift and electronic equipment
CN113596508A (en) * 2021-08-11 2021-11-02 广州方硅信息技术有限公司 Virtual gift presenting method, device, medium and computer equipment of live broadcast room
WO2023061461A1 (en) * 2021-10-14 2023-04-20 北京字跳网络技术有限公司 Special effect playback method and system for live broadcast room, and device

Also Published As

Publication number Publication date
CN116366909A (en) 2023-06-30

Similar Documents

Publication Publication Date Title
US20240107127A1 (en) Video display method and apparatus, video processing method, apparatus, and system, device, and medium
KR102117433B1 (en) Interactive video generation
US20230057566A1 (en) Multimedia processing method and apparatus based on artificial intelligence, and electronic device
WO2014204987A1 (en) Method and apparatus for customized software development kit (sdk) generation
JP7240505B2 (en) Voice packet recommendation method, device, electronic device and program
WO2017219967A1 (en) Virtual keyboard generation method and apparatus
CN114036439A (en) Website building method, device, medium and electronic equipment
CN108304434B (en) Information feedback method and terminal equipment
CN116188250A (en) Image processing method, device, electronic equipment and storage medium
CN114116086A (en) Page editing method, device, equipment and storage medium
CN113722638B (en) Page display method and device, electronic equipment and storage medium
CN112286486B (en) Operation method of application program on intelligent terminal, intelligent terminal and storage medium
CN116701811B (en) Webpage processing method, device, equipment and computer readable storage medium
CN113596529A (en) Terminal control method and device, computer equipment and storage medium
CN116366909B (en) Virtual article processing method and device, electronic equipment and storage medium
CN117939190A (en) Method for generating video content and music content with soundtrack and electronic equipment
CN108287707A (en) JSX document generating methods, device, storage medium and computer equipment
CN105706023A (en) Communicating with unsupported input device
KR102040392B1 (en) Method for providing augmented reality contents service based on cloud
CN113438532B (en) Video processing method, video playing method, video processing device, video playing device, electronic equipment and storage medium
CN113709575B (en) Video editing processing method and device, electronic equipment and storage medium
KR20190094879A (en) Method and apparatus for producing modular content for outdoor augmented reality services
CN115687816A (en) Resource processing method and device
CN113987142A (en) Voice intelligent interaction method, device, equipment and storage medium with virtual doll
CN113867874A (en) Page editing and displaying method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant