CN113485596B - Virtual model processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113485596B
CN113485596B (application CN202110767415.1A)
Authority
CN
China
Prior art keywords
expression
virtual model
picture
interface
expression selection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110767415.1A
Other languages
Chinese (zh)
Other versions
CN113485596A (en)
Inventor
王众怡
王可欣
孙佳佳
马里千
张国鑫
李秋帆
李曼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Amusement Starcraft Beijing Technology Co ltd
Original Assignee
Amusement Starcraft Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Amusement Starcraft Beijing Technology Co ltd filed Critical Amusement Starcraft Beijing Technology Co ltd
Priority to CN202110767415.1A priority Critical patent/CN113485596B/en
Publication of CN113485596A publication Critical patent/CN113485596A/en
Application granted granted Critical
Publication of CN113485596B publication Critical patent/CN113485596B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides a virtual model processing method, a virtual model processing device, electronic equipment and a storage medium, and belongs to the technical field of computers. The method comprises the following steps: in response to detecting a virtual model with a target label, adding an expression selection control on an expression selection interface, and establishing an association relationship between the expression selection control and the virtual model; in response to a first trigger operation on the expression selection control, acquiring an expression picture based on the virtual model; and displaying the expression selection control and the expression picture in an associated manner in the expression selection interface. According to this technical scheme, because the expression selection control is added to the expression selection interface, an expression picture can be displayed in the interface once the control is triggered, allowing the user to determine the expression presented by the virtual model from the picture and to conveniently and rapidly view and manage the virtual model, thereby improving man-machine interaction efficiency.

Description

Virtual model processing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular to a virtual model processing method, a virtual model processing device, an electronic device and a storage medium.
Background
With the development of computer technology, virtual models, i.e., three-dimensional models representing avatars, are becoming more and more widely used. For example, in a live-streaming scene, a host may broadcast video with a virtual model instead of a real image.
At present, a host can use more and more virtual models in live-streaming and similar scenes, and different virtual models have different appearances, expressions, actions and the like. How to let the host conveniently and quickly view and manage these virtual models is therefore a problem to be solved urgently.
Disclosure of Invention
The present disclosure provides a virtual model processing method, a processing device, electronic equipment and a storage medium, so that a user can conveniently and rapidly view and manage a virtual model, improving man-machine interaction efficiency. The technical scheme of the present disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided a method comprising:
in response to detecting a virtual model with a target label, adding an expression selection control on an expression selection interface, and establishing an association relationship between the expression selection control and the virtual model, wherein the target label is used for indicating the virtual model to present a custom type expression;
responding to a first triggering operation of the expression selection control, and acquiring an expression picture based on the virtual model, wherein the expression picture is used for displaying the expression presented by the virtual model;
and displaying the expression selection control and the expression picture in an associated manner in the expression selection interface.
In some embodiments, the obtaining, in response to a first trigger operation of the expression selection control, an expression picture based on the virtual model includes:
responding to the first triggering operation of the expression selection control, and displaying the virtual model;
and capturing a picture containing the head area from the virtual model to obtain the expression picture.
In some embodiments, the capturing a picture including a head region from the virtual model includes:
identifying a head region of the virtual model;
and based on target size information, capturing a picture containing the head region, wherein the target size information is used for indicating the size of the expression picture.
In some embodiments, the capturing a picture including a head region from the virtual model includes:
displaying an interception prompt box, wherein the interception prompt box is used for indicating an area to be intercepted;
Based on the adjustment operation of the interception prompt box, adjusting the relative position of the interception prompt box and the virtual model;
and responding to the intercepting operation, intercepting the picture containing the area indicated by the intercepting prompt box.
In some embodiments, the custom type of expression is a dynamic expression composed of static expressions contained in multiple layers;
after the expression selection control and the expression picture are displayed in an associated mode in the expression selection interface, the processing method of the virtual model further comprises the following steps:
responding to a second triggering operation of the expression selection control, displaying a parameter setting interface, wherein the parameter setting interface is used for setting presentation parameters of the dynamic expression, and the presentation parameters comprise at least one of the playing speed of a single layer, the number of loops of the multiple layers, and the loop interval;
and determining target presentation parameters of the expression presented by the virtual model based on the parameter setting operation of the parameter setting interface.
In some embodiments, the processing method of the virtual model further includes:
and responding to a third triggering operation of the expression selection control, and displaying the virtual model presenting the dynamic expression on a live interface based on the target presentation parameter.
In some embodiments, before the adding of the expression selection control in the expression selection interface, the processing method of the virtual model further includes:
displaying prompt information, wherein the prompt information is used for prompting that the virtual model exists and that the expression selection interface needs to be viewed;
and responding to the confirmation operation of the prompt information, and displaying the expression selection interface.
In some embodiments, the processing method of the virtual model further includes:
based on a layer file input in a model creation interface, sending a model creation request to a server, wherein the server is used for creating at least one virtual model based on the layer file carried by the model creation request, adding the target label to each virtual model presenting an expression of the custom type, and returning the at least one virtual model, wherein the expression of the custom type is generated by the server based on the expression layer customized by the user in the layer file;
and performing label detection on at least one virtual model returned by the server, wherein the label detection is used for detecting whether the at least one virtual model has the target label.
In some embodiments, the processing method of the virtual model further includes:
Displaying a model management interface, wherein the model management interface comprises a model creation option and an expression management option;
responding to the triggering operation of the model creation options, and displaying the model creation interface;
and responding to the triggering operation of the expression management options, and displaying the expression selection interface.
In some embodiments, after the expression selection control and the expression picture are displayed in association in the expression selection interface, the processing method of the virtual model further includes:
responsive to a trigger operation of a reacquire control associated with the expression picture, re-capturing a picture based on the virtual model;
and determining the re-captured picture as an updated expression picture.
In some embodiments, the processing method of the virtual model further includes:
and deleting the expression selection control and the expression picture in response to triggering operation of the deletion control associated with the expression picture.
According to a second aspect of the embodiments of the present disclosure, there is provided a processing apparatus for a virtual model, including:
the relation establishing unit is configured to execute the steps of adding an expression selection control on an expression selection interface in response to the detection of a virtual model with a target label, and establishing an association relation between the expression selection control and the virtual model, wherein the target label is used for indicating the virtual model to present a custom type expression;
a picture acquisition unit configured to acquire, in response to a first trigger operation on the expression selection control, an expression picture based on the virtual model, the expression picture being used for displaying the expression presented by the virtual model;
and the first display unit is configured to execute the associated display of the expression selection control and the expression picture in the expression selection interface.
In some embodiments, the picture acquisition unit comprises:
a display subunit configured to perform displaying the virtual model in response to the first trigger operation on the expression selection control;
and the interception subunit is configured to intercept the picture containing the head area from the virtual model to obtain the expression picture.
In some embodiments, the intercept subunit is configured to perform identifying a header region of the virtual model; and based on target size information, capturing a picture containing the head region, wherein the target size information is used for indicating the size of the expression picture.
In some embodiments, the intercepting subunit is configured to perform displaying an intercepting prompt, where the intercepting prompt is used for indicating an area to be intercepted; based on the adjustment operation of the interception prompt box, adjusting the relative position of the interception prompt box and the virtual model; and responding to the intercepting operation, intercepting the picture containing the area indicated by the intercepting prompt box.
In some embodiments, the custom type of expression is a dynamic expression composed of static expressions contained in multiple layers;
the processing device of the virtual model further comprises:
a second display unit configured to display, in response to a second trigger operation on the expression selection control, a parameter setting interface for setting presentation parameters of the dynamic expression, the presentation parameters including at least one of a play speed of a single layer, a number of loops of the multiple layers, and a loop interval;
and a parameter determining unit configured to determine, based on a parameter setting operation on the parameter setting interface, target presentation parameters of the expression presented by the virtual model.
In some embodiments, the second display unit is further configured to perform displaying the virtual model presenting the dynamic expression on a live interface based on the target presentation parameter in response to a third trigger operation on the expression selection control.
In some embodiments, the processing device of the virtual model further comprises:
the third display unit is configured to display prompt information, wherein the prompt information is used for prompting the existence of the virtual model and the expression selection interface is required to be checked;
The third display unit is further configured to perform a confirmation operation in response to the prompt information, and display the expression selection interface.
In some embodiments, the processing device of the virtual model further comprises:
a request sending unit, configured to execute a layer file input on a model creation interface, and send a model creation request to a server, where the server is configured to create at least one virtual model based on the layer file carried by the model creation request, add the target tag to the virtual model presenting the expression of the custom type, and return the at least one virtual model, where the form of the custom type is generated by the server based on the expression layer custom defined by the user in the layer file;
and the label detection unit is configured to perform label detection on at least one virtual model returned by the server, wherein the label detection is used for detecting whether the at least one virtual model has the target label.
In some embodiments, the processing device of the virtual model further comprises:
a fourth display unit configured to perform display of a model management interface including a model creation option and an expression management option;
the third display unit is further configured to display the model creation interface in response to a trigger operation on the model creation option;
the third display unit is further configured to display the expression selection interface in response to a trigger operation on the expression management option.
In some embodiments, the picture acquisition unit is configured to re-capture a picture based on the virtual model in response to a trigger operation of a reacquire control associated with the expression picture;
and to determine the re-captured picture as an updated expression picture.
In some embodiments, the apparatus further comprises:
and a control management unit configured to delete the expression selection control and the expression picture in response to a trigger operation of the deletion control associated with the expression picture.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising:
one or more processors;
a memory for storing the processor-executable program code;
wherein the processor is configured to execute the program code to implement the method of processing a virtual model described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the above-described virtual model processing method.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer instructions which, when executed by a processor in an electronic device, implement a method of processing a virtual model as described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
according to the scheme provided by the embodiment of the disclosure, the expression selection control associated with the virtual model presenting the user-defined expression is added to the expression selection interface, so that after the expression selection control is triggered, an expression picture can be displayed in the expression selection interface, a user can determine the expression presented by the virtual model based on the expression picture, the user can conveniently and rapidly view and manage the virtual model, and the man-machine interaction efficiency is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic diagram of an implementation environment of a method of processing a virtual model, according to an example embodiment.
FIG. 2 is a flow chart illustrating a method of processing a virtual model according to an exemplary embodiment.
FIG. 3 is a flow chart illustrating another method of processing a virtual model according to an exemplary embodiment.
Fig. 4 is a schematic diagram of an expression layer, according to an exemplary embodiment.
FIG. 5 is a schematic diagram illustrating a model creation interface, according to an example embodiment.
Fig. 6 is a schematic diagram illustrating a hint information according to an exemplary embodiment.
Fig. 7 is a schematic diagram of an expression selection interface, shown according to an example embodiment.
Fig. 8 is a schematic diagram illustrating capture of an expression picture, according to an exemplary embodiment.
Fig. 9 is a schematic diagram illustrating display of an expression picture, according to an exemplary embodiment.
FIG. 10 is a schematic diagram illustrating a parameter setting interface, according to an example embodiment.
FIG. 11 is a block diagram of a processing device of a virtual model, according to an example embodiment.
FIG. 12 is a block diagram of a processing device of another virtual model, shown according to an example embodiment.
Fig. 13 is a block diagram of a terminal according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The user information referred to in the present disclosure may be information authorized by the user or sufficiently authorized by each party.
Fig. 1 is a schematic view of an implementation environment of a virtual model processing method, referring to fig. 1, according to an exemplary embodiment, where the implementation environment includes a terminal 101 and a server 102. The terminal 101 is directly or indirectly connected to the server 102 through wired or wireless communication, which is not limited by the embodiment of the present disclosure.
The terminal 101 installs and runs an application program supporting generation and display of a virtual model, which is a three-dimensional model of an avatar, such as a virtual character, a cartoon character or an anthropomorphic animal character. For example, the application may be a live-streaming application that supports broadcasting based on a three-dimensional avatar model instead of the host's actual image, or a video application that supports recording short videos based on such a model; the embodiments of the present disclosure do not limit the type of the application. Illustratively, a user account is logged into the application running on the terminal 101.
In some embodiments, the terminal 101 is, but is not limited to, a smart phone, a tablet, a notebook computer, a desktop computer, a smart speaker, a smart watch, or the like. The terminal 101 broadly refers to one of a plurality of terminals; this embodiment is illustrated with the terminal 101.
The server 102 is configured to provide a background service for the application program run by the terminal 101; the background service includes creating a corresponding virtual model based on a model creation request from the terminal 101.
In some embodiments, the server 102 is a server, a plurality of servers, a cloud server, a cloud computing platform, or a virtualization center, etc., which is not limited by the embodiments of the present disclosure.
Fig. 2 is a flowchart of a method for processing a virtual model, referring to fig. 2, taking an electronic device as an example of the terminal 101, where the method specifically includes the steps of:
in step S201, in response to detecting a virtual model with a target label, the terminal adds an expression selection control on an expression selection interface, establishes an association relationship between the expression selection control and the virtual model, and the target label is used for indicating that the virtual model presents a custom type expression.
In the embodiment of the present disclosure, the terminal installs and runs an application program supporting a virtual model, which is a three-dimensional model of an avatar. The expression selection interface is an interface in the application program that displays a plurality of expression selection controls; the plurality of expression selection controls correspond one-to-one to a plurality of virtual models presenting different expressions, i.e., the expressions presented by the virtual models associated with different expression selection controls are different, and the expressions presented by the plurality of virtual models are of the default type. When the terminal detects a virtual model with the target label, a new expression selection control is added to the expression selection interface, and the association relationship between the newly added expression selection control and the virtual model with the target label is then established.
It should be noted that an expression selection control may be associated with an expression picture, where the expression picture displays the expression of the virtual model associated with that control, so that the user can determine, based on the expression picture, the expression of the virtual model associated with each control, thereby improving the efficiency with which the user selects a virtual model. For an expression selection control that is not yet associated with an expression picture, the user can set an associated expression picture via the first trigger operation.
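The bookkeeping described above can be illustrated with a minimal sketch (all class and function names here are hypothetical, not part of the disclosure): models returned by the server are scanned for the target label, and a new expression selection control is created and associated with each tagged model.

```python
from dataclasses import dataclass, field

# Hypothetical value of the target label indicating a custom-type expression.
CUSTOM_EXPRESSION_TAG = "custom_expression"

@dataclass
class VirtualModel:
    model_id: str
    labels: set

@dataclass
class ExpressionSelectionInterface:
    # Association relationship: expression selection control id -> virtual model.
    control_to_model: dict = field(default_factory=dict)

    def add_control_for(self, model: VirtualModel) -> str:
        """Add a new expression selection control and associate it with the model."""
        control_id = f"control_{len(self.control_to_model)}"
        self.control_to_model[control_id] = model
        return control_id

def on_models_received(interface, models):
    """Label detection: add a control for every model carrying the target label."""
    return [interface.add_control_for(m) for m in models
            if CUSTOM_EXPRESSION_TAG in m.labels]
```

Models without the target label simply pass through without a control being added, matching the label-detection step described for the server-returned models.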
In step S202, in response to the first trigger operation of the expression selection control, the terminal obtains an expression picture based on the virtual model, where the expression picture is used to display an expression presented by the virtual model.
In the embodiment of the disclosure, the newly added expression selection control is not associated with an expression picture, and when the terminal detects the first trigger operation of the expression selection control, the terminal can acquire the expression picture based on the virtual model associated with the expression selection control, and because the expression presented by the virtual model is the expression of the custom type, the expression picture presents the expression of the custom type presented by the virtual model, so that the user can intuitively determine the expression of the virtual model associated with the expression selection control based on the expression picture.
In step S203, the terminal displays the expression selection control and the expression picture in association in the expression selection interface.
In the embodiment of the disclosure, since the expression selection control is associated with the virtual model, and the expression picture displays the expression presented by the virtual model, after the terminal acquires the expression picture it displays the expression picture in association with the expression selection control, so that the user can conveniently and rapidly view and manage the virtual model based on the expression selection control and the expression picture. Here, the associated display includes, but is not limited to: displaying the expression picture adjacent to the expression selection control, displaying the expression picture inside the expression selection control, or displaying the expression selection control on the expression picture.
According to the scheme provided by the embodiment of the disclosure, the expression selection control associated with the virtual model presenting the user-defined expression is added to the expression selection interface, so that after the expression selection control is triggered, an expression picture can be displayed in the expression selection interface, a user can determine the expression presented by the virtual model based on the expression picture, the user can conveniently and rapidly view and manage the virtual model, and the man-machine interaction efficiency is improved.
In some embodiments, the obtaining an expression picture based on the virtual model in response to the first trigger operation of the expression selection control includes:
responding to the first triggering operation of the expression selection control, and displaying the virtual model;
and capturing a picture containing the head area from the virtual model to obtain the expression picture.
By displaying the virtual model and intercepting the picture containing the head area of the virtual model into an expression picture, the expression picture can intuitively reflect the presented expression of the virtual model, so that a user can determine the expression presented by the virtual model according to the expression picture, and the man-machine interaction efficiency is improved.
In some embodiments, the capturing a picture containing a head region from the virtual model includes:
identifying a head region of the virtual model;
based on target size information, capturing a picture containing the head region, wherein the target size information is used for indicating the size of the expression picture.
Based on the target size information, the head area of the virtual model can be automatically identified and intercepted, so that the expression picture acquisition efficiency is improved.
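As an illustration of this automatic capture path, the sketch below (hypothetical names; the head box is assumed to come from a separate head-recognition step, and the image is a toy list of rows rather than real pixel data) crops a window of the target size centered on the identified head region:

```python
def crop_expression_picture(image_rows, head_box, target_size):
    """Crop a (width, height) window centered on the detected head region.

    image_rows: list of row lists, a stand-in for pixel data;
    head_box: (top, left, bottom, right) from a hypothetical head-recognition step;
    target_size: (width, height) taken from the target size information.
    """
    top, left, bottom, right = head_box
    width, height = target_size
    # Center the crop window on the head region, clamped to the image bounds.
    cx, cy = (left + right) // 2, (top + bottom) // 2
    x0 = min(max(0, cx - width // 2), len(image_rows[0]) - width)
    y0 = min(max(0, cy - height // 2), len(image_rows) - height)
    return [row[x0:x0 + width] for row in image_rows[y0:y0 + height]]
```

The clamping ensures the expression picture always has exactly the indicated size even when the head sits near an image edge.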
In some embodiments, the capturing a picture containing a head region from the virtual model includes:
Displaying an interception prompt box, wherein the interception prompt box is used for indicating an area to be intercepted;
based on the adjustment operation of the interception prompt box, adjusting the relative position of the interception prompt box and the virtual model;
and responding to the intercepting operation, intercepting the picture containing the area indicated by the intercepting prompt box.
Through displaying the interception prompt box, the user can adjust the position of the interception prompt box according to the requirement, so that the intercepted expression picture is a picture meeting the requirement of the user, the user can determine the expression presented by the virtual model according to the expression picture, and the man-machine interaction efficiency is improved.
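The adjustable prompt box can be modeled as a movable rectangle. The sketch below (Python, with invented names) shows the adjustment operation changing the box's position relative to the displayed model, and the capture operation slicing out the indicated area:

```python
from dataclasses import dataclass

@dataclass
class CapturePromptBox:
    x: int       # left edge of the area to be captured
    y: int       # top edge of the area to be captured
    width: int
    height: int

    def move_by(self, dx, dy):
        # Adjustment operation: drag the box relative to the displayed model.
        self.x += dx
        self.y += dy

    def capture(self, image_rows):
        # Capture operation: take the picture inside the indicated area.
        return [row[self.x:self.x + self.width]
                for row in image_rows[self.y:self.y + self.height]]
```

Resizing could be added the same way; only position adjustment is shown here for brevity.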
In some embodiments, the custom type of expression is a dynamic expression composed of static expressions contained in multiple layers;
after the expression selection control and the expression picture are displayed in a correlated manner in the expression selection interface, the processing method of the virtual model further comprises the following steps:
responding to a second triggering operation of the expression selection control, displaying a parameter setting interface, wherein the parameter setting interface is used for setting presentation parameters of the dynamic expression, and the presentation parameters comprise at least one of the playing speed of a single layer, the number of loops of the multiple layers, and the loop interval;
And determining target presentation parameters of the expression presented by the virtual model based on the parameter setting operation of the parameter setting interface.
Through the display parameter setting interface, a user can set presentation parameters of the expression based on the parameter setting interface, the setting mode is simple and convenient, and the human-computer interaction efficiency is high.
In some embodiments, the processing method of the virtual model further includes:
and responding to a third triggering operation of the expression selection control, and displaying, on a live interface, the virtual model presenting the dynamic expression based on the target presentation parameters.
When the third trigger operation of the expression selection control is detected, the virtual model presenting the dynamic expression with the target presentation parameters is displayed on the live interface, so that the aim of switching the expression presented by the virtual model can be fulfilled; the operation is simple and convenient, and the man-machine interaction efficiency is high.
In some embodiments, the processing method of the virtual model further includes, before the expression selection interface adds the expression selection control:
displaying prompt information, wherein the prompt information is used for prompting that the virtual model exists and that the expression selection interface needs to be viewed;
and responding to the confirmation operation of the prompt information, and displaying the expression selection interface.
By displaying the prompt information, the user can be prompted that the virtual model with the target label exists currently, and the user needs to view the expression selection interface, so that the user can set the expression picture associated with the expression selection control corresponding to the virtual model with the target label based on the expression selection interface.
In some embodiments, the processing method of the virtual model further includes:
based on a layer file input in a model creation interface, sending a model creation request to a server, wherein the server is used for creating at least one virtual model based on the layer file carried by the model creation request, adding the target label for the virtual model presenting the expression of the custom type, and returning the at least one virtual model, wherein the expression of the custom type is generated by the server based on an expression layer custom by a user in the layer file;
and performing label detection on at least one virtual model returned by the server, wherein the label detection is used for detecting whether the at least one virtual model has the target label.
By sending the model creation request carrying the layer file to the server, the server can create and return the corresponding virtual models and add the target label to each virtual model presenting a custom-type expression. Label detection is then performed on the received virtual models, so that whether each virtual model has the target label can be determined and the expression picture associated with the expression selection control can be set. The detection is performed automatically without user operation; the processing mode is simple and efficient, and the man-machine interaction efficiency is improved.
In some embodiments, the processing method of the virtual model further includes:
displaying a model management interface, wherein the model management interface comprises a model creation option and an expression management option;
responding to the triggering operation of the model creation option, and displaying the model creation interface;
and responding to the triggering operation of the expression management option, and displaying the expression selection interface.
And the model management interface is displayed, so that a user can perform model creation on the model creation interface by triggering the model creation option, or the user sets an expression picture associated with the expression selection control on the expression selection interface by triggering the expression management option.
In some embodiments, after the expression selection control and the expression picture are displayed in association in the expression selection interface, the processing method of the virtual model further includes:
responsive to a trigger operation of a reacquiring control associated with the expression picture, re-intercepting the picture based on the virtual model;
and determining the re-intercepted picture as an updated expression picture.
By providing the reacquiring control, when the user is dissatisfied with the acquired expression picture, the expression picture can be conveniently and rapidly reacquired, and the man-machine interaction efficiency is improved.
In some embodiments, the processing method of the virtual model further includes:
and deleting the expression selection control and the expression picture in response to the triggering operation of the deletion control associated with the expression picture.
By providing the deletion control, a user can delete the expression selection control which is not wanted to be used through the deletion control, so that the expression selection control and the expression picture which are displayed currently are reduced, the rest expression selection control and the expression picture are convenient to manage, and the man-machine interaction efficiency is improved.
The foregoing fig. 2 illustrates the basic flow of the present disclosure, and the scheme provided in the present disclosure is further described below based on a specific implementation. Fig. 3 is a flowchart illustrating another method for processing a virtual model according to an exemplary embodiment. Taking the electronic device being the terminal 101 as an example, the processing method of the virtual model is executed by the terminal. Referring to fig. 3, the method includes:
in step S301, the terminal sends a model creation request to a server based on a layer file input in a model creation interface, where the server is configured to create at least one virtual model based on the layer file carried by the model creation request, add a target tag to the virtual model that presents a custom type expression, and return the at least one virtual model, where the custom type expression is generated by the server based on an expression layer customized by a user in the layer file.
In the embodiment of the disclosure, the terminal installs and runs an application program supporting a virtual model, and the model creation interface is an interface in the application program. The user can input the layer file on the model creation interface, and the terminal then sends a model creation request carrying the layer file to the server. The server provides a background service for the application program, for example, the server 102 shown in fig. 1. The server parses the model creation request uploaded by the terminal to obtain the layer file, creates at least one virtual model based on the layer file, adds a target label to each virtual model presenting a custom-type expression, and finally returns the at least one virtual model to the terminal. The custom-type expression is generated by the server based on an expression layer customized by the user in the layer file; that is, the server can add the target label when generating the custom-type expression for the virtual model. The server can also create, based on the layer file, a virtual model presenting a default-type expression, the default-type expression being generated by the server based on a default expression layer in the layer file or based on an expression layer set locally by the server, which is not limited in the embodiment of the present disclosure.
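The server-side handling described above (parse the layer file, create one virtual model per expression, and tag the models presenting custom-type expressions) can be sketched as follows. This is a minimal illustration, not the actual service: the request and model dictionaries, the literal label name `custom_expression`, and the assumption that the layer file has been pre-parsed into expression groups are all hypothetical.

```python
def handle_model_creation(request):
    """Server-side sketch: create one virtual model per expression group
    parsed from the layer file, and add the target label to each model
    presenting a custom-type expression."""
    models = []
    for name, layers in request["layer_file"].items():
        model = {"expression": name, "layers": layers, "labels": []}
        if name in request.get("custom_expressions", ()):
            model["labels"].append("custom_expression")  # the target label
        models.append(model)
    return models

request = {
    "layer_file": {"K1": ["K11", "K12", "K13", "K14"], "smile": ["S1"]},
    "custom_expressions": ["K1"],
}
models = handle_model_creation(request)
print([m["labels"] for m in models])  # [['custom_expression'], []]
```

In this sketch the target label is simply a string appended to a `labels` list, which is what the terminal-side label detection in step S302 would then look for.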
The layer file is a file generated by a drawing application program such as Photoshop or SAI and includes a plurality of layers. In the embodiment of the present disclosure, the layer file includes expression layers customized by the user. In some embodiments, multiple expression layers in the layer file form a layer group, that is, the expression layers belonging to the same expression are packed into one layer group. It should be noted that the above description of the layer file is exemplary, and the content included in the layer file and the division manner of the layer groups are not limited in the embodiment of the disclosure. In some embodiments, the layer file also includes the body-part layers and the like required to generate the model. The storage paths of different layers in the layer file differ: the layers describing expressions are stored under one parent path, the layers describing body parts are stored under another, and the two parent paths are different. Among the layers describing expressions, the layers describing the same expression are under the same sub-path, and the layers describing different expressions are under different sub-paths. Whether the user-defined expression layers and the non-user-defined expression layers are under the same or different parent paths is not limited in the embodiment of the disclosure.
For example, fig. 4 is a schematic diagram of expression layers according to an exemplary embodiment. As shown in fig. 4, taking the K1 expression included in a layer file as an example, the K1 expression is a custom-type expression and its storage path is under the parent path of the K expressions; that is, the expressions under the K path are custom-type expressions, which further include a K2 expression, a K3 expression, and so on. The K1 expression is composed of 4 expression layers: K11, K12, K13 and K14.
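The parent-path/sub-path convention can be sketched as a small grouping routine that recovers, from a flat list of layer paths, which layers belong to which custom-type expression. The `/`-separated path strings and the parent-path name `K` follow the fig. 4 example and are assumptions for illustration.

```python
from collections import defaultdict

def group_expression_layers(layer_paths, custom_parent="K"):
    """Group layers by expression sub-path.

    Layers describing the same expression share a sub-path under a
    common parent path, e.g. "K/K1/K11"; layers outside the custom
    parent path (e.g. body parts) are ignored here.
    """
    expressions = defaultdict(list)
    for path in layer_paths:
        parts = path.split("/")
        if len(parts) == 3 and parts[0] == custom_parent:
            _, expression, layer = parts
            expressions[expression].append(layer)
    return dict(expressions)

layers = ["K/K1/K11", "K/K1/K12", "K/K1/K13", "K/K1/K14",
          "K/K2/K21", "Body/arm"]
print(group_expression_layers(layers))
# {'K1': ['K11', 'K12', 'K13', 'K14'], 'K2': ['K21']}
```

Each resulting group corresponds to one dynamic expression whose frames are the group's static-expression layers.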
In some embodiments, the application further includes a model management interface including a model creation option and an expression management option. The terminal can display the model management interface, and then respond to the triggering operation of the model creation options to display the model creation interface; and responding to the triggering operation of the expression management options, and displaying an expression selection interface. The model creation interface is used to set the relevant parameters for creating the virtual model. The expression selection interface is used for selecting virtual models presenting different expressions. In other words, the user can perform model creation or manage the created virtual model through options provided by the model management interface. Through displaying the model management interface, a user can perform model creation on the model creation interface by triggering the model creation option, or set an expression picture associated with the expression selection control on the expression selection interface by triggering the expression management option.
In some embodiments, the model management interface further includes a face capture management option, a character management option, an action management option, a background management option, and the like. The face capture management option is used for triggering display of a face capture management interface, which is used for setting relevant parameters of face capture. The character management option is used for triggering display of a character management interface, which is used for managing the avatar corresponding to the user account. The action management option is used for triggering display of an action management interface, which is used for managing actions presented by the virtual model. The background management option is used for triggering display of a background management interface, which is used for managing the background of the virtual model, the background including: images, video, audio, and the like.
For example, FIG. 5 is a schematic diagram of a model creation interface, shown in accordance with an exemplary embodiment. As shown in fig. 5, the user is an anchor user logged in on the terminal, and the anchor user triggers display of the model creation interface by triggering the model creation option on the model management interface. The model creation interface includes a variety of setting options. The anchor user inputs the layer file by triggering the change-layer-file button, and triggers the terminal to send a model creation request to the server by triggering the save button. If the terminal receives at least one virtual model returned by the server, the terminal displays "creation successful".
In step S302, the terminal performs label detection on at least one virtual model returned by the server, where the label detection is used to detect whether the at least one virtual model has a target label, and the target label is used to indicate that the corresponding virtual model presents a custom type expression.
In the embodiment of the present disclosure, after receiving at least one virtual model returned by the server, the terminal traverses the at least one virtual model and performs label detection, to detect whether the at least one virtual model has a target label, that is, to detect whether a virtual model presenting a custom type expression exists in the at least one virtual model. If none of the at least one virtual model has a target label, executing step S307; if a virtual model with a target label is detected, step S303 is performed. The received virtual models are subjected to label detection, so that whether each virtual model has a target label or not can be determined, further, the expression picture associated with the expression selection control can be set, the detection can be automatically performed without user operation, the processing mode is simple and efficient, and the man-machine interaction efficiency is improved.
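The traversal and label detection in this step can be sketched as a simple filter over the returned models; the dictionary representation and the literal label name are assumptions for illustration.

```python
def detect_target_label(models, target_label="custom_expression"):
    """Return the virtual models whose label set contains the target
    label, i.e. the models that present a custom-type expression."""
    return [m for m in models if target_label in m.get("labels", [])]

models = [
    {"id": 1, "labels": ["custom_expression"]},
    {"id": 2, "labels": []},
    {"id": 3, "labels": ["custom_expression"]},
]
tagged = detect_target_label(models)
print(len(tagged))  # 2
```

An empty result corresponds to the branch that proceeds to step S307; a non-empty result corresponds to displaying the prompt information in step S303.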
It should be noted that, the steps S301 to S302 are optional steps of the processing method of the virtual model provided in the embodiment of the present disclosure, and accordingly, the terminal can also create the virtual model in other manners, such as creation by the terminal, which is not limited in the embodiment of the present disclosure.
In step S303, in response to detecting the virtual model with the target label, the terminal displays a prompt message for prompting that the virtual model with the target label exists, and the expression selection interface needs to be checked.
In the embodiment of the disclosure, when the terminal detects the virtual model with the target label, a prompt message is displayed, wherein the prompt message displays the number of the virtual models with the target label, a first option for viewing the expression selection interface and a second option for ignoring the prompt message. By displaying the prompt information, the user can be prompted that the virtual model with the target label exists currently, and the user needs to view the expression selection interface, so that the user can set the expression picture displayed in the expression selection control corresponding to the virtual model with the target label based on the expression selection interface.
For example, referring to fig. 6, fig. 6 is a schematic diagram illustrating prompt information according to an exemplary embodiment. As shown in fig. 6, after the anchor user clicks the OK button in fig. 5, the terminal performs label detection on the at least one received virtual model, and if 2 virtual models with the target label are detected, the terminal displays the prompt information: "It is detected that the character has 2 hand-drawn expressions. Enter the expression management interface to view them?", together with a "view now" option, which is the first option, and a "view later" option, which is the second option. If the anchor user clicks the "view now" option, the terminal detects a confirmation operation and displays the expression selection interface; if the anchor user clicks the "view later" option, the terminal detects an ignore operation and displays the character management interface. Of course, the terminal can also display other interfaces.
It should be noted that, step S303 is an optional step of the virtual model processing method provided in the embodiment of the present disclosure, and accordingly, the terminal may also directly display the expression selection interface without displaying the prompt information, which is not limited in the embodiment of the present disclosure.
In step S304, in response to the confirmation operation of the prompt message, the terminal displays the expression selection interface, adds an expression selection control to the expression selection interface, and establishes an association relationship between the expression selection control and the virtual model.
In the embodiment of the disclosure, after the terminal detects the confirmation operation on the prompt information, the terminal displays the expression selection interface, on which one or more expression selection controls are displayed; the one or more expression selection controls are in one-to-one correspondence with one or more virtual models, and the expressions presented by the virtual models associated with different expression selection controls are different. Taking one virtual model with the target label as an example, the terminal newly adds an expression selection control on the expression selection interface and establishes an association relationship between the newly added expression selection control and the virtual model with the target label. Correspondingly, for the other virtual models with the target label, the terminal adds a corresponding number of expression selection controls on the expression selection interface and respectively establishes association relationships between the newly added expression selection controls and those virtual models, so that each virtual model with the target label is associated with an expression selection control on the expression selection interface.
For example, fig. 7 is a schematic diagram of an expression selection interface shown in accordance with an exemplary embodiment. As shown in fig. 7, after the anchor user clicks the "view now" option in fig. 6, the terminal displays the expression selection interface, which displays expression selection controls for expressions such as "no expression", "expression 1", "expression 2", "embarrassed", "subtle", and "laughing". The expressions corresponding to "expression 1" and "expression 2" are custom-type expressions and are not yet associated with expression pictures, while the expressions corresponding to the other expression selection controls are default-type expressions. The expression selection control "expression 1" is associated with the virtual model that presents expression 1.
The terminal can also display a rendering window used for displaying the virtual model. When the terminal displays the expression selection interface, the "no expression" expression selection control is selected by default, and the rendering window displays the virtual model presenting no expression.
In step S305, in response to the first trigger operation of the expression selection control, the terminal obtains an expression picture based on the virtual model, where the expression picture is used to display an expression presented by the virtual model.
In the embodiment of the disclosure, since the expression selection control associated with the virtual model with the target label is not associated with the expression picture initially, the user can trigger the expression selection control through the first trigger operation, and correspondingly, when the terminal detects the first trigger operation on the expression selection control, the terminal acquires the expression picture, so that the expression picture can display the expression presented by the virtual model.
In some embodiments, the terminal acquires the expression picture by intercepting a head region of the virtual model. Correspondingly, responding to a first triggering operation of the expression selection control, and displaying a virtual model associated with the expression selection control by the terminal; and then the terminal intercepts the picture containing the head area from the virtual model to obtain the expression picture. By displaying the virtual model and intercepting the picture containing the head area of the virtual model into an expression picture, the expression picture can intuitively reflect the presented expression of the virtual model, so that a user can determine the expression presented by the virtual model associated with the expression selection control according to the expression picture, and the man-machine interaction efficiency is improved.
For example, referring to fig. 8, fig. 8 is a schematic diagram illustrating capture of an expression picture according to an exemplary embodiment. As shown in fig. 8, expression 1 is a squinting expression. After the user clicks the expression selection control "expression 1" in fig. 7, the terminal displays the virtual model presenting the squinting expression in the rendering window, and then intercepts the head area of the virtual model to obtain the expression picture.
In some embodiments, the emoticon is obtained by the terminal automatically identifying a head region of the virtual model. Correspondingly, the terminal identifies a head region of the target virtual model, and then intercepts a picture containing the head region based on target size information, wherein the target size information is used for indicating the size of the expression picture. Based on the target size information, the head area of the virtual model can be automatically identified and intercepted, so that the expression picture acquisition efficiency is improved.
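The automatic interception step can be sketched as pure geometry: centre a rectangle of the size given by the target size information on the detected head region, then clamp it to the rendered frame. This is a minimal sketch; the `(x, y, w, h)` box format and the assumption that head detection has already produced a bounding box are illustrative.

```python
def crop_head_region(image_size, head_box, target_size):
    """Compute a crop rectangle of the target size, centred on the
    detected head region and clamped to the image bounds.

    image_size: (width, height) of the rendered frame.
    head_box:   (x, y, w, h) of the detected head region.
    target_size: (w, h) from the target size information.
    Returns (left, top, right, bottom).
    """
    img_w, img_h = image_size
    x, y, w, h = head_box
    tw, th = target_size
    cx, cy = x + w / 2, y + h / 2            # head centre
    left = min(max(0, int(cx - tw / 2)), img_w - tw)  # clamp to frame
    top = min(max(0, int(cy - th / 2)), img_h - th)
    return (left, top, left + tw, top + th)

# Head detected at (300, 80, 200, 220) in a 1080x720 render,
# target size 256x256 for the expression picture.
print(crop_head_region((1080, 720), (300, 80, 200, 220), (256, 256)))
# (272, 62, 528, 318)
```

Clamping keeps the expression picture fully inside the frame even when the head sits near an edge of the rendering window.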
In some embodiments, the emoticon is obtained by the terminal according to an intercepting operation of the user. Correspondingly, the terminal displays an interception prompt box, wherein the interception prompt box is used for indicating an area to be intercepted; then, the terminal adjusts the relative position of the interception prompt box and the virtual model based on the adjustment operation of the interception prompt box; and responding to the intercepting operation, and intercepting the picture containing the area indicated by the intercepting prompt box by the terminal. The user can also adjust the size of the intercept prompt box, which is not limited by the disclosed embodiments. Through displaying the interception prompt box, a user can adjust the position of the interception prompt box according to requirements, so that the intercepted expression picture is a picture meeting the requirements of the user, the user can determine the expression presented by the virtual model associated with the expression selection control according to the expression picture, and the man-machine interaction efficiency is improved.
In some embodiments, the user is also able to change the emoticons associated with the emotion selection control. Correspondingly, responding to the triggering operation of the re-acquisition control associated with the expression picture, the terminal re-intercepts the picture based on the virtual model, and then determines the re-intercepted picture as an updated expression picture. By providing the reacquiring control, when the user is dissatisfied with the acquired expression picture, the expression picture can be conveniently and rapidly reacquired, and the man-machine interaction efficiency is improved.
In step S306, the terminal displays the expression selection control and the expression picture in association in the expression selection interface.
In the embodiment of the disclosure, after the terminal acquires the expression picture, the expression picture is displayed in the expression selection control, and the expression picture is displayed in association with the expression selection control. Wherein the associated display includes, but is not limited to: and displaying the expression picture and the expression selection control adjacently, or displaying the expression picture in the expression selection control, or displaying the expression selection control on the expression picture. Through the associated display of the expression selection control and the expression picture, the user can conveniently and rapidly view and manage the virtual model based on the expression picture associated with the expression selection control.
For example, fig. 9 is a schematic diagram showing a display of an emoticon according to an exemplary embodiment. As shown in fig. 9, the terminal displays the obtained expression picture in the expression selection control of "expression 1".
In step S307, in response to the second trigger operation of the expression selection control, the terminal displays a parameter setting interface for setting a presentation parameter of the dynamic expression, where the presentation parameter includes at least one of a playing speed of a single layer, a number of cycles of the multiple layers, and a cycle interval.
In the embodiment of the disclosure, the expression of the custom type presented by the virtual model is a dynamic expression composed of static expressions contained in a plurality of layers. The user can trigger the display parameter setting interface through the second triggering operation, so that the presentation parameters of the dynamic expression are set on the parameter setting interface. Correspondingly, the terminal determines target presentation parameters of the expression presented by the virtual model based on the parameter setting operation of the parameter setting interface. Wherein the second triggering operation is a right click operation, or a drag operation, or a double click operation, which is not limited by the embodiments of the present disclosure. Through the display parameter setting interface, a user can set presentation parameters of the expression based on the parameter setting interface, the setting mode is simple and convenient, and the human-computer interaction efficiency is high.
For example, referring to fig. 10, fig. 10 is a schematic diagram of a parameter setting interface, according to an exemplary embodiment. As shown in fig. 10, the anchor user performs a right click operation on the expression selection control "expression 1", and the terminal displays a parameter setting interface, where the parameter setting interface displays three presentation parameters including a play speed, a circulation number and a circulation interval, and determines a target presentation parameter set by the user based on the parameter setting operation on the parameter setting interface. It should be noted that, the parameter setting interface further includes a preview expression option for previewing the dynamic expression presented based on the presentation parameter and a restore default option for clearing the parameter set by the user at the parameter setting interface.
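The three presentation parameters map naturally onto a playback loop: the play speed scales each layer's display time, while the loop count and loop interval control repetition. A minimal sketch with an assumed per-frame duration; `render` and `sleep` are injectable so the demo runs without real delays.

```python
import time

def play_dynamic_expression(layers, play_speed=1.0, loop_count=2,
                            loop_interval=0.5, frame_duration=0.1,
                            render=print, sleep=time.sleep):
    """Play the static-expression layers as one dynamic expression.

    play_speed scales the per-layer display time, loop_count repeats
    the layer sequence, and loop_interval pauses between loops.
    """
    for i in range(loop_count):
        for layer in layers:
            render(layer)
            sleep(frame_duration / play_speed)
        if i < loop_count - 1:
            sleep(loop_interval)

# Play the four K1 layers twice at double speed, without real delays.
play_dynamic_expression(["K11", "K12", "K13", "K14"],
                        play_speed=2.0, loop_count=2, loop_interval=0.3,
                        sleep=lambda s: None)
```

Restoring the defaults (as with the restore-default option) would simply mean calling the function without the user-set keyword arguments.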
In some embodiments, the terminal can store the target presentation parameters set based on the parameter setting interface, and then call the target presentation parameters based on a third trigger operation on the expression selection control when the user starts live broadcast.
In step S308, in response to the third trigger operation of the expression selection control, the terminal displays a virtual model presenting a dynamic expression on the live interface based on the target presentation parameter.
In the embodiment of the present disclosure, in the live broadcast process, the user may trigger the expression selection control through a third trigger operation, and then the terminal displays, on the live broadcast interface, a virtual model presenting the dynamic expression described above based on the target presentation parameter determined in step S307, so as to achieve the effect of switching the expression presented by the virtual model.
In some embodiments, the terminal is also able to delete the expression selection control, that is, no longer display it. In response to a trigger operation of the deletion control associated with the expression picture, the terminal deletes the expression selection control and the expression picture. By providing the deletion control, the user can delete an expression selection control and expression picture that are no longer wanted, so that the currently displayed expression selection controls and expression pictures are reduced, the remaining ones are convenient to manage, and the man-machine interaction efficiency is improved.
In some embodiments, the application program supporting the virtual model is a plug-in of a live program, such as a live assistant. The live assistant is used to set up a virtual model for live. The user can select any virtual model based on the live helper, which is sent to the live program for display by the live helper.
It should be noted that the above step S307 and step S308 are optional steps of the virtual model processing method provided by the embodiments of the present disclosure; accordingly, the terminal may not perform them, or may perform only one of them. For example, when the expression presented by the virtual model is a static expression, the above step S307 is not performed. In addition, alternative implementations of the steps described above may be combined freely, and the embodiments of the present disclosure are not limited thereto.
According to the scheme provided by the embodiment of the disclosure, the expression selection control associated with the virtual model presenting the user-defined expression is added to the expression selection interface, so that after the expression selection control is triggered, an expression picture can be displayed in the expression selection interface, a user can determine the expression presented by the virtual model based on the expression picture, the user can conveniently and rapidly view and manage the virtual model, and the man-machine interaction efficiency is improved.
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
FIG. 11 is a block diagram of a processing device of a virtual model, according to an example embodiment. Referring to fig. 11, the apparatus includes: a relationship establishing unit 1101, a picture acquiring unit 1102, and a first display unit 1103.
A relationship establishing unit 1101 configured to perform adding an expression selection control on an expression selection interface in response to detecting a virtual model having a target label, and establish an association relationship between the expression selection control and the virtual model, where the target label is used to instruct the virtual model to present a custom type expression;
a picture obtaining unit 1102, configured to, in response to a first trigger operation on the expression selection control, obtain an expression picture based on the virtual model, where the expression picture is used for displaying an expression presented by the virtual model;
the first display unit 1103 is configured to perform associated display of the expression selection control and the expression picture in the expression selection interface.
According to the apparatus provided by the embodiments of the present disclosure, an expression selection control associated with a virtual model presenting a custom-type expression is added to the expression selection interface, so that after the expression selection control is triggered, an expression picture can be displayed in the expression selection interface. A user can determine the expression presented by the virtual model based on the expression picture, and can thus quickly and conveniently view and manage the virtual model, which improves human-machine interaction efficiency.
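The control-registration flow of the three units above can be sketched as follows. This is a hypothetical Python sketch for illustration only: the class name, the tag string `"custom_expression"`, and the data shapes are assumptions, not part of the disclosure, which does not fix any implementation.

```python
# Hypothetical sketch of units 1101-1103: register a control for a tagged
# model, then capture and associate an expression picture on first trigger.
class ExpressionSelectionInterface:
    def __init__(self):
        self.controls = {}   # control_id -> associated virtual model
        self.pictures = {}   # control_id -> expression picture

    def add_control_for(self, model):
        """Relationship establishing unit: add a control only for a model
        carrying the target label (assumed here to be a tag string)."""
        if "custom_expression" not in model.get("tags", []):
            return None
        control_id = "ctrl_%d" % len(self.controls)
        self.controls[control_id] = model   # the association relationship
        return control_id

    def on_first_trigger(self, control_id, capture_fn):
        """Picture acquisition unit: obtain an expression picture from the
        associated model; the first display unit would then show both."""
        model = self.controls[control_id]
        picture = capture_fn(model)
        self.pictures[control_id] = picture  # associated display
        return picture
```

A model without the target label simply gets no control, matching the condition "in response to detecting a virtual model having a target label".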
In some embodiments, referring to FIG. 12, which is a block diagram of another virtual model processing apparatus according to an exemplary embodiment, the picture acquisition unit 1102 includes:
a display subunit 11021, configured to display the virtual model in response to the first trigger operation on the expression selection control;

an interception subunit 11022, configured to capture a picture containing the head region from the virtual model to obtain the expression picture.
In some embodiments, referring to FIG. 12, the interception subunit 11022 is configured to identify the head region of the virtual model, and capture a picture containing the head region based on target size information, where the target size information is used to indicate the size of the expression picture.
In some embodiments, referring to FIG. 12, the interception subunit 11022 is configured to display an interception prompt box, where the interception prompt box is used to indicate the area to be captured; adjust the relative position of the interception prompt box and the virtual model based on an adjustment operation on the interception prompt box; and, in response to a capture operation, capture the picture containing the area indicated by the interception prompt box.
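The two capture paths just described (automatic, centered on a detected head region at a target size; and manual, via the adjustable prompt box) can be sketched as follows. The sketch assumes the caller supplies the head-region rectangle and target size, since the disclosure does not fix a particular detection algorithm; pixel grids stand in for real images.

```python
# Illustrative sketch of the two capture paths; rectangles are
# (left, top, right, bottom), images are row-major 2D grids.
def crop(image, left, top, width, height):
    """Crop a row-major 2D pixel grid to the given rectangle."""
    return [row[left:left + width] for row in image[top:top + height]]

def capture_by_head_region(image, head_box, target_size):
    """Automatic path: center a target-size frame on the detected head box."""
    left, top, right, bottom = head_box
    tw, th = target_size
    cx, cy = (left + right) // 2, (top + bottom) // 2
    return crop(image, max(cx - tw // 2, 0), max(cy - th // 2, 0), tw, th)

def capture_by_prompt_box(image, prompt_box):
    """Manual path: capture exactly the area indicated by the prompt box,
    after the user has adjusted its position relative to the model."""
    left, top, right, bottom = prompt_box
    return crop(image, left, top, right - left, bottom - top)
```

In a real implementation the same rectangle arithmetic would typically be delegated to an imaging library (e.g. a `crop(box)`-style call) rather than list slicing.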
In some embodiments, the custom-type expression is a dynamic expression composed of the static expressions contained in a plurality of layers;
referring to fig. 12, the processing apparatus of the virtual model further includes:
a second display unit 1104, configured to display, in response to a second trigger operation on the expression selection control, a parameter setting interface for setting presentation parameters of the dynamic expression, the presentation parameters including at least one of a play speed of a single layer, a number of cycles of the plurality of layers, and a cycle interval;

a parameter determining unit 1105, configured to determine, based on a parameter setting operation on the parameter setting interface, a target presentation parameter of the expression presented by the virtual model.
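The three presentation parameters (per-layer play speed, number of cycles, cycle interval) fully determine when each layer of the dynamic expression is shown. The following sketch makes that relationship concrete; the dataclass and field names are illustrative assumptions, not the disclosure's data model.

```python
# Hedged sketch: derive a playback timeline from the presentation
# parameters named in the disclosure. All names are illustrative.
from dataclasses import dataclass

@dataclass
class PresentationParams:
    frame_duration_ms: int   # play speed of a single layer
    loop_count: int          # number of cycles of the plurality of layers
    loop_interval_ms: int    # pause between consecutive cycles

def playback_schedule(num_layers, params):
    """Return (layer_index, start_time_ms) pairs for the whole animation."""
    schedule, t = [], 0
    for cycle in range(params.loop_count):
        for layer in range(num_layers):
            schedule.append((layer, t))
            t += params.frame_duration_ms
        if cycle < params.loop_count - 1:
            t += params.loop_interval_ms   # cycle interval between loops
    return schedule
```

For example, two layers played at 100 ms per layer for two cycles with a 50 ms interval yield frames at 0, 100, 250, and 350 ms, which is exactly what the parameter setting interface lets the user control.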
In some embodiments, referring to FIG. 12, the second display unit 1104 is further configured to display, in response to a third trigger operation on the expression selection control, the virtual model presenting the dynamic expression on a live-streaming interface based on the target presentation parameter.
In some embodiments, referring to fig. 12, the processing device of the virtual model further includes:
a third display unit 1106, configured to display a prompt message, where the prompt message is used to prompt that the virtual model exists and that the expression selection interface needs to be viewed;

the third display unit 1106 is further configured to display the expression selection interface in response to a confirmation operation on the prompt message.
In some embodiments, referring to fig. 12, the processing device of the virtual model further includes:
a request sending unit 1107, configured to send, based on a layer file input on a model creation interface, a model creation request to a server, where the server is configured to create at least one virtual model based on the layer file carried by the model creation request, add the target tag to any virtual model presenting an expression of the custom type, and return the at least one virtual model, where the expression of the custom type is generated by the server based on the expression layer customized by the user in the layer file;

a tag detection unit 1108, configured to perform tag detection on the at least one virtual model returned by the server, where the tag detection is used to detect whether the at least one virtual model has the target tag.
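The client-side half of this exchange (build a creation request from the layer file, then filter the returned models by the target tag) can be sketched as below. The request fields and the tag string are assumptions for illustration; the disclosure does not specify a wire format.

```python
# Illustrative sketch of units 1107-1108; field names are assumptions.
TARGET_TAG = "custom_expression"

def build_creation_request(layer_file):
    """Request sending unit: wrap the input layer file in a creation request."""
    return {"action": "create_model", "layer_file": layer_file}

def detect_tagged_models(models):
    """Tag detection unit: keep only the returned models carrying the
    target label, i.e. those presenting a custom-type expression."""
    return [m for m in models if TARGET_TAG in m.get("tags", [])]
```

Only the models surviving `detect_tagged_models` would trigger the prompt message and the addition of expression selection controls described earlier.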
In some embodiments, referring to fig. 12, the processing device of the virtual model further includes:
a fourth display unit 1109, configured to display a model management interface, the model management interface including a model creation option and an expression management option;

the third display unit 1106 is further configured to display the model creation interface in response to a trigger operation on the model creation option;

the third display unit 1106 is further configured to display the expression selection interface in response to a trigger operation on the expression management option.
In some embodiments, referring to FIG. 12, the picture acquisition unit 1102 is configured to re-capture a picture based on the virtual model in response to a trigger operation on a re-capture control associated with the expression picture, and determine the re-captured picture as an updated expression picture.
In some embodiments, referring to fig. 12, the apparatus further comprises:
a control management unit 1110, configured to delete the expression selection control and the expression picture in response to a trigger operation on a deletion control associated with the expression picture.
It should be noted that, when the virtual model processing apparatus provided in the foregoing embodiments processes the virtual model, the division into the functional units above is merely an example. In practical applications, the foregoing functions may be allocated to different functional units as needed; that is, the internal structure of the electronic device may be divided into different functional units to complete all or part of the functions described above. In addition, the virtual model processing apparatus provided in the foregoing embodiments and the embodiments of the virtual model processing method belong to the same concept; for the specific implementation process, refer to the method embodiments, which are not repeated here.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method, and will not be described in detail here.
When the electronic device is provided as a terminal, FIG. 13 is a block diagram of a terminal 1300 according to an exemplary embodiment. The terminal 1300 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1300 may also be referred to by other names, such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 1300 includes: a processor 1301, and a memory 1302.
The processor 1301 may include one or more processing cores, for example a 4-core processor or an 8-core processor. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1301 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1301 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. Memory 1302 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1302 is used to store at least one program code for execution by processor 1301 to implement the methods of processing a virtual model provided by the method embodiments in the present disclosure.
In some embodiments, the terminal 1300 may further optionally include: a peripheral interface 1303 and at least one peripheral. The processor 1301, the memory 1302, and the peripheral interface 1303 may be connected by a bus or signal lines. The respective peripheral devices may be connected to the peripheral device interface 1303 through a bus, a signal line, or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, a display screen 1305, a camera assembly 1306, audio circuitry 1307, a positioning assembly 1308, and a power supply 1309.
The peripheral interface 1303 may be used to connect at least one I/O (Input/Output) related peripheral to the processor 1301 and the memory 1302. In some embodiments, the processor 1301, the memory 1302, and the peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1304 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 1304 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1304 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication) related circuits, which is not limited by the present disclosure.
The display screen 1305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display, the display screen 1305 also has the ability to capture touch signals at or above its surface. The touch signal may be input to the processor 1301 as a control signal for processing. In this case, the display screen 1305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1305, disposed on the front panel of the terminal 1300; in other embodiments, there may be at least two display screens 1305, disposed on different surfaces of the terminal 1300 or in a folded configuration; in still other embodiments, the display screen 1305 may be a flexible display disposed on a curved or folded surface of the terminal 1300. The display screen 1305 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly shaped screen. The display screen 1305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 1306 is used to capture images or video. Optionally, the camera assembly 1306 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera may be fused to realize a background blurring function, and the main camera and the wide-angle camera may be fused to realize panoramic shooting, virtual reality (VR) shooting, or other fused shooting functions. In some embodiments, the camera assembly 1306 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 1307 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 1301 for processing, or inputting the electric signals to the radio frequency circuit 1304 for voice communication. For purposes of stereo acquisition or noise reduction, a plurality of microphones may be provided at different portions of the terminal 1300, respectively. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is then used to convert electrical signals from the processor 1301 or the radio frequency circuit 1304 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 1307 may also comprise a headphone jack.
The positioning component 1308 is used to determine the current geographic location of the terminal 1300 to enable navigation or LBS (Location Based Service). The positioning component 1308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1309 is used to power the various components in the terminal 1300. The power supply 1309 may use alternating current, direct current, disposable batteries, or rechargeable batteries. When the power supply 1309 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast-charge technology.
In some embodiments, terminal 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyroscope sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1300. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. Processor 1301 may control display screen 1305 to display a user interface in either a landscape view or a portrait view based on gravitational acceleration signals acquired by acceleration sensor 1311. The acceleration sensor 1311 may also be used for the acquisition of motion data of a game or user.
The gyro sensor 1312 may detect a body direction and a rotation angle of the terminal 1300, and the gyro sensor 1312 may collect a 3D motion of the user on the terminal 1300 in cooperation with the acceleration sensor 1311. Processor 1301 can implement the following functions based on the data collected by gyro sensor 1312: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
Pressure sensor 1313 may be disposed on a side frame of terminal 1300 and/or below display screen 1305. When the pressure sensor 1313 is disposed at a side frame of the terminal 1300, a grip signal of the terminal 1300 by a user may be detected, and the processor 1301 performs left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 1313. When the pressure sensor 1313 is disposed at the lower layer of the display screen 1305, the processor 1301 realizes control of the operability control on the UI interface according to the pressure operation of the user on the display screen 1305. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1314 is used to collect a fingerprint of the user, and the processor 1301 identifies the identity of the user based on the fingerprint collected by the fingerprint sensor 1314, or the fingerprint sensor 1314 identifies the identity of the user based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the user is authorized by processor 1301 to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 1314 may be disposed on the front, back, or side of the terminal 1300. When a physical key or vendor Logo is provided on the terminal 1300, the fingerprint sensor 1314 may be integrated with the physical key or vendor Logo.
The optical sensor 1315 is used to collect ambient light intensity. In one embodiment, processor 1301 may control the display brightness of display screen 1305 based on the intensity of ambient light collected by optical sensor 1315. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 1305 is turned up; when the ambient light intensity is low, the display brightness of the display screen 1305 is turned down. In another embodiment, processor 1301 may also dynamically adjust the shooting parameters of camera assembly 1306 based on the intensity of ambient light collected by optical sensor 1315.
The proximity sensor 1316, also referred to as a distance sensor, is typically provided on the front panel of the terminal 1300. The proximity sensor 1316 is used to collect the distance between the user and the front of the terminal 1300. In one embodiment, when the proximity sensor 1316 detects that the distance between the user and the front of the terminal 1300 gradually decreases, the processor 1301 controls the display screen 1305 to switch from the screen-on state to the screen-off state; when the proximity sensor 1316 detects that the distance between the user and the front of the terminal 1300 gradually increases, the processor 1301 controls the display screen 1305 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the structure shown in fig. 13 is not limiting of terminal 1300 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In an exemplary embodiment, a storage medium including program code is also provided, for example a memory 1302 including program code, the program code being executable by the processor 1301 of the terminal 1300 to complete the above virtual model processing method. In some embodiments, the storage medium may be a non-transitory computer-readable storage medium, for example, a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided, comprising computer instructions which, when executed by a processor, implement the above-described method of processing a virtual model.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (23)

1. A method of processing a virtual model, the method comprising:
in response to detecting a virtual model with a target label, adding an expression selection control to an expression selection interface, and establishing an association relationship between the expression selection control and the virtual model, wherein the target label is used for indicating the virtual model to present a self-defined expression, and the self-defined expression is a dynamic expression formed by static expressions contained in a plurality of image layers; before adding the new expression selection control, displaying a plurality of other expression selection controls on the expression selection interface, wherein the plurality of other expression selection controls are in one-to-one correspondence with a plurality of virtual models presenting different expressions;
responding to a first triggering operation of the expression selection control, and acquiring an expression picture based on the virtual model, wherein the expression picture is used for displaying the expression presented by the virtual model;
The expression selection control and the expression picture are displayed in a correlated mode in the expression selection interface;
responding to a second triggering operation of the expression selection control, and displaying a parameter setting interface; determining target presentation parameters of the expression presented by the virtual model based on parameter setting operation at the parameter setting interface;
and responding to a third triggering operation of the expression selection control, and displaying the virtual model presenting the dynamic expression on a live interface based on the target presentation parameter.
2. The method according to claim 1, wherein the obtaining an expression picture based on the virtual model in response to the first trigger operation of the expression selection control includes:
responding to the first triggering operation of the expression selection control, and displaying the virtual model;
and capturing a picture containing the head area from the virtual model to obtain the expression picture.
3. The method for processing the virtual model according to claim 2, wherein capturing a picture including a head region from the virtual model includes:
identifying a head region of the virtual model;
And based on target size information, capturing a picture containing the head region, wherein the target size information is used for indicating the size of the expression picture.
4. The method for processing the virtual model according to claim 2, wherein capturing a picture including a head region from the virtual model includes:
displaying an interception prompt box, wherein the interception prompt box is used for indicating an area to be intercepted;
based on the adjustment operation of the interception prompt box, adjusting the relative position of the interception prompt box and the virtual model;
and responding to the intercepting operation, intercepting the picture containing the area indicated by the intercepting prompt box.
5. The method according to claim 1, wherein the parameter setting interface is configured to set presentation parameters of the dynamic expression, the presentation parameters including at least one of a playback speed of a single layer, a number of cycles of the plurality of layers, and a cycle interval.
6. The method for processing a virtual model according to claim 1, wherein before adding the expression selection control to the expression selection interface, the method for processing a virtual model further comprises:
Displaying prompt information, wherein the prompt information is used for prompting the existence of the virtual model and the expression selection interface is required to be checked;
and responding to the confirmation operation of the prompt information, and displaying the expression selection interface.
7. The method for processing a virtual model according to claim 1, wherein the method for processing a virtual model further comprises:
based on a layer file input in a model creation interface, sending a model creation request to a server, wherein the server is used for creating at least one virtual model based on the layer file carried by the model creation request, adding the target label to the virtual model presenting the expression of the custom type, and returning the at least one virtual model, wherein the expression of the custom type is generated by the server based on the expression layer customized by a user in the layer file;
and performing label detection on at least one virtual model returned by the server, wherein the label detection is used for detecting whether the at least one virtual model has the target label.
8. The method for processing a virtual model according to claim 7, further comprising:
Displaying a model management interface, wherein the model management interface comprises a model creation option and an expression management option;
responding to the triggering operation of the model creation options, and displaying the model creation interface;
and responding to the triggering operation of the expression management options, and displaying the expression selection interface.
9. The method according to claim 1, wherein after the expression selection control and the expression picture are displayed in association in the expression selection interface, the method further comprises:
in response to a trigger operation of a re-capture control associated with the expression picture, re-intercepting the picture based on the virtual model;
and determining the re-intercepted picture as an updated expression picture.
10. The method for processing a virtual model according to claim 1, wherein the method for processing a virtual model further comprises:
and deleting the expression selection control and the expression picture in response to triggering operation of the deletion control associated with the expression picture.
11. A processing apparatus for a virtual model, the processing apparatus for a virtual model comprising:
A relation establishing unit configured to perform adding an expression selection control on an expression selection interface in response to detection of a virtual model with a target label, and establish an association relation between the expression selection control and the virtual model, wherein the target label is used for indicating the virtual model to present a custom type expression, and the custom type expression is a dynamic expression formed by static expressions contained in a plurality of image layers; before adding the new expression selection control, displaying a plurality of other expression selection controls on the expression selection interface, wherein the plurality of other expression selection controls are in one-to-one correspondence with a plurality of virtual models presenting different expressions;
a picture acquisition unit configured to perform a first trigger operation in response to the expression selection control, and acquire an expression picture based on the virtual model, the expression picture being used for displaying an expression presented by the virtual model;
a first display unit configured to perform associated display of the expression selection control and the expression picture in the expression selection interface;
a second display unit configured to perform a display of a parameter setting interface in response to a second trigger operation on the expression selection control;
A parameter determination unit configured to perform a parameter setting operation based on the parameter setting interface, to determine a target presentation parameter of the expression presented by the virtual model;
the second display unit is further configured to perform a third trigger operation in response to the expression selection control, and display the virtual model presenting the dynamic expression on a live interface based on the target presentation parameter.
12. The apparatus according to claim 11, wherein the picture acquisition unit includes:
a display subunit configured to perform displaying the virtual model in response to the first trigger operation on the expression selection control;
and the interception subunit is configured to intercept the picture containing the head area from the virtual model to obtain the expression picture.
13. The processing apparatus of the virtual model according to claim 12, wherein the interception subunit is configured to perform identifying a head region of the virtual model; and based on target size information, capturing a picture containing the head region, wherein the target size information is used for indicating the size of the expression picture.
14. The processing apparatus of the virtual model according to claim 12, wherein the interception subunit is configured to perform displaying an interception prompt box for indicating an area to be intercepted; based on the adjustment operation of the interception prompt box, adjusting the relative position of the interception prompt box and the virtual model; and responding to the intercepting operation, intercepting the picture containing the area indicated by the intercepting prompt box.
15. The apparatus according to claim 11, wherein the parameter setting interface is configured to set presentation parameters of the dynamic expression, the presentation parameters including at least one of a playback speed of a single layer, a number of cycles of the plurality of layers, and a cycle interval.
16. The apparatus for processing a virtual model according to claim 11, wherein the apparatus for processing a virtual model further comprises:
the third display unit is configured to display prompt information, wherein the prompt information is used for prompting the existence of the virtual model and the expression selection interface is required to be checked;
the third display unit is further configured to perform a confirmation operation in response to the prompt information, and display the expression selection interface.
17. The apparatus for processing a virtual model according to claim 16, wherein the apparatus for processing a virtual model further comprises:
a request sending unit, configured to execute a layer file input on a model creation interface, and send a model creation request to a server, where the server is configured to create at least one virtual model based on the layer file carried by the model creation request, add the target tag to the virtual model presenting the expression of the custom type, and return the at least one virtual model, where the expression of the custom type is generated by the server based on the expression layer customized by the user in the layer file;
and the label detection unit is configured to perform label detection on at least one virtual model returned by the server, wherein the label detection is used for detecting whether the at least one virtual model has the target label.
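The client-side tag detection of claim 17 amounts to filtering the models returned by the server for the target tag. An illustrative Python sketch; the tag name `custom_expression` and the dict-based model representation are assumptions made here, not part of the patent:

```python
# Hypothetical name for the target tag the server attaches to models
# that present an expression of the custom type.
CUSTOM_EXPRESSION_TAG = "custom_expression"

def detect_custom_models(models: list[dict]) -> list[dict]:
    """Tag detection: keep only the virtual models returned by the server
    that carry the target tag marking a custom-type expression."""
    return [m for m in models if CUSTOM_EXPRESSION_TAG in m.get("tags", [])]
```

Models that pass this filter are the ones for which the apparatus would then show the prompt information of claim 16.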
18. The virtual model processing apparatus according to claim 17, wherein the virtual model processing apparatus further comprises:
a fourth display unit, configured to display a model management interface, wherein the model management interface comprises a model creation option and an expression management option;
wherein the third display unit is further configured to display the model creation interface in response to a trigger operation on the model creation option;
and the third display unit is further configured to display the expression selection interface in response to a trigger operation on the expression management option.
19. The virtual model processing apparatus according to claim 11, wherein the picture acquisition unit is configured to: in response to a trigger operation on a re-capture control associated with the expression picture, capture a picture again based on the virtual model; and determine the re-captured picture as an updated expression picture.
20. The virtual model processing apparatus according to claim 11, wherein the virtual model processing apparatus further comprises:
a control management unit, configured to delete the expression selection control and the expression picture in response to a trigger operation on a deletion control associated with the expression picture.
21. An electronic device, comprising:
one or more processors; and
a memory for storing program code executable by the one or more processors;
wherein the one or more processors are configured to execute the program code to implement the virtual model processing method according to any one of claims 1 to 10.
22. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, cause the electronic device to perform the virtual model processing method according to any one of claims 1 to 10.
23. A computer program product comprising computer instructions, wherein the computer instructions, when executed by a processor, implement the virtual model processing method according to any one of claims 1 to 10.
CN202110767415.1A 2021-07-07 2021-07-07 Virtual model processing method and device, electronic equipment and storage medium Active CN113485596B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110767415.1A CN113485596B (en) 2021-07-07 2021-07-07 Virtual model processing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113485596A CN113485596A (en) 2021-10-08
CN113485596B (en) 2023-12-22

Family

ID=77940825

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110767415.1A Active CN113485596B (en) 2021-07-07 2021-07-07 Virtual model processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113485596B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114047851B (en) * 2021-11-15 2024-02-06 北京字跳网络技术有限公司 Expression processing method and device, electronic equipment, storage medium and product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107657651A (en) * 2017-08-28 2018-02-02 腾讯科技(上海)有限公司 Expression animation generation method and device, storage medium and electronic installation
CN110827378A (en) * 2019-10-31 2020-02-21 北京字节跳动网络技术有限公司 Virtual image generation method, device, terminal and storage medium
CN111612876A (en) * 2020-04-27 2020-09-01 北京小米移动软件有限公司 Expression generation method and device and storage medium
WO2021004114A1 (en) * 2019-07-05 2021-01-14 深圳壹账通智能科技有限公司 Automatic meme generation method and apparatus, computer device and storage medium
CN112270735A (en) * 2020-10-27 2021-01-26 北京达佳互联信息技术有限公司 Virtual image model generation method and device, electronic equipment and storage medium
CN112598785A (en) * 2020-12-25 2021-04-02 游艺星际(北京)科技有限公司 Method, device and equipment for generating three-dimensional model of virtual image and storage medium


Also Published As

Publication number Publication date
CN113485596A (en) 2021-10-08

Similar Documents

Publication Publication Date Title
CN110971930B (en) Live virtual image broadcasting method, device, terminal and storage medium
CN110278464B (en) Method and device for displaying list
CN113204298B (en) Method and device for displaying release progress, electronic equipment and storage medium
CN109327608B (en) Song sharing method, terminal, server and system
CN112118477B (en) Virtual gift display method, device, equipment and storage medium
CN113098755B (en) Group chat creating method, device, terminal and storage medium
CN110740340B (en) Video live broadcast method and device and storage medium
CN112751679B (en) Instant messaging message processing method, terminal and server
CN109922356B (en) Video recommendation method and device and computer-readable storage medium
CN113411680A (en) Multimedia resource playing method, device, terminal and storage medium
CN112163406A (en) Interactive message display method and device, computer equipment and storage medium
CN110750734A (en) Weather display method and device, computer equipment and computer-readable storage medium
CN111949879A (en) Method and device for pushing message, electronic equipment and readable storage medium
CN111192072A (en) User grouping method and device and storage medium
CN111064657B (en) Method, device and system for grouping concerned accounts
CN113485596B (en) Virtual model processing method and device, electronic equipment and storage medium
CN111275607A (en) Interface display method and device, computer equipment and storage medium
CN114554112B (en) Video recording method, device, terminal and storage medium
CN114143280B (en) Session display method and device, electronic equipment and storage medium
CN111464829B (en) Method, device and equipment for switching media data and storage medium
CN111314205B (en) Instant messaging matching method, device, system, equipment and storage medium
CN110941458B (en) Method, device, equipment and storage medium for starting application program
CN110942426B (en) Image processing method, device, computer equipment and storage medium
CN109618018B (en) User head portrait display method, device, terminal, server and storage medium
CN113064537B (en) Media resource playing method, device, equipment, medium and product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant