CN115760628A - Head portrait display method and device - Google Patents


Info

Publication number
CN115760628A
Authority
CN
China
Prior art keywords
head portrait
avatar
display
frame
layers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211490527.8A
Other languages
Chinese (zh)
Inventor
王昄月
王颖奇
刘世杰
尹蓓
胡虹
方方
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Priority to CN202211490527.8A
Publication of CN115760628A
Legal status: Pending

Abstract

The application provides an avatar display method and an avatar display apparatus. The avatar display method includes: acquiring avatar material to be displayed and determining at least two avatar layers corresponding to the avatar material; acquiring a current offset parameter of a mobile terminal and determining dynamic parameters of the at least two avatar layers according to the current offset parameter, wherein different avatar layers have different dynamic parameters; and dynamically displaying the avatar material in an avatar frame according to the dynamic parameters of the at least two avatar layers. By dividing the avatar material into layers and rendering each layer according to its own dynamic parameters, the avatar material can change dynamically within the avatar frame as the mobile terminal is tilted or moved, providing an interactive avatar and a more distinctive display effect for the user.

Description

Head portrait display method and device
Technical Field
The application relates to the technical field of image processing, and in particular to an avatar display method. The application also relates to an avatar display apparatus, a computing device, and a computer-readable storage medium.
Background
With the rapid development of computer and internet technology, many scenarios involve representing a user's image in the form of an avatar, such as avatar display within an application, on a personal profile page, in a check-in scenario, in a friend list, or when viewing dynamic content.
In the prior art, a user typically selects or uploads an avatar picture, the picture is cropped to the shape of an avatar frame, and the cropped picture is embedded into the avatar frame for display. In this approach the avatar is shown only as a static picture inside a fixed frame; the display form is rigid and monotonous, the display effect is poor, and the user experience suffers considerably.
Disclosure of Invention
In view of this, the present application provides an avatar display method. The application also relates to an avatar display apparatus, a computing device, and a computer-readable storage medium, which address the technical problems in the prior art of a fixed and monotonous display form, a poor display effect, and a degraded user experience.
According to a first aspect of the embodiments of the present application, there is provided an avatar display method, including:
acquiring avatar material to be displayed, and determining at least two avatar layers corresponding to the avatar material;
acquiring a current offset parameter of a mobile terminal, and determining dynamic parameters of the at least two avatar layers according to the current offset parameter, wherein different avatar layers have different dynamic parameters;
and dynamically displaying the avatar material in an avatar frame according to the dynamic parameters of the at least two avatar layers.
According to a second aspect of the embodiments of the present application, there is provided an avatar display apparatus, including:
a first determining module configured to acquire avatar material to be displayed and determine at least two avatar layers corresponding to the avatar material;
a second determining module configured to acquire a current offset parameter of a mobile terminal and determine dynamic parameters of the at least two avatar layers according to the current offset parameter, wherein different avatar layers have different dynamic parameters;
and a display module configured to dynamically display the avatar material in an avatar frame according to the dynamic parameters of the at least two avatar layers.
According to a third aspect of the embodiments of the present application, there is provided a computing device, comprising:
a memory and a processor;
wherein the memory is configured to store computer-executable instructions, and the processor is configured to execute the computer-executable instructions to implement the operational steps of any of the avatar display methods described above.
According to a fourth aspect of the embodiments of the present application, there is provided a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of any of the avatar display methods described above.
The avatar display method provided herein acquires avatar material to be displayed and determines at least two avatar layers corresponding to the avatar material; acquires a current offset parameter of the mobile terminal and determines dynamic parameters of the at least two avatar layers according to the current offset parameter, wherein different avatar layers have different dynamic parameters; and dynamically displays the avatar material in the avatar frame according to the dynamic parameters of the at least two avatar layers. After the avatar material is obtained, it is divided into at least two avatar layers; a dynamic parameter is determined for each layer based on the current offset parameter of the mobile terminal; and the layers are rendered and displayed in the avatar frame based on those parameters, so that the avatar material is displayed dynamically. By dividing the avatar material into layers and rendering each layer according to its own dynamic parameters, the avatar material changes dynamically within the avatar frame as the mobile terminal is tilted or moved. This provides an interactive avatar, offers the user a more distinctive display effect, allows the dynamic parameters of the individual layers to be customized, enriches the display form, and greatly improves the user experience.
Drawings
Fig. 1 is a flowchart of an avatar display method according to an embodiment of the present application;
Fig. 2a is a schematic flowchart of special-effect parameter configuration according to an embodiment of the present application;
Fig. 2b is a schematic interface diagram of special-effect parameter configuration according to an embodiment of the present application;
Fig. 2c is a schematic view of a first avatar display page according to an embodiment of the present application;
Fig. 2d is a schematic view of a second avatar display page according to an embodiment of the present application;
Fig. 2e is a schematic view of a third avatar display page according to an embodiment of the present application;
Fig. 2f is a schematic view of a fourth avatar display page according to an embodiment of the present application;
Fig. 2g is a schematic flowchart of adding an avatar pendant according to an embodiment of the present application;
Fig. 2h is a schematic flowchart of replacing an avatar according to an embodiment of the present application;
Fig. 3 is a flowchart of an avatar display method applied to a video platform according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an avatar display apparatus according to an embodiment of the present application;
Fig. 5 is a block diagram of a computing device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the application can be implemented in many ways other than those described herein, and those skilled in the art can make similar variations without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used in one or more embodiments of the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of the present application, a first may also be referred to as a second and, similarly, a second may also be referred to as a first. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
First, the noun terms referred to in one or more embodiments of the present application are explained.
Digital collection: a unique digital certificate, generated using blockchain technology, that corresponds to a specific work or piece of art. It enables genuine, trusted digital issuance, purchase, collection, and use while protecting the digital copyright of the item.
Gyroscope: an angular-motion detection device that uses the moment of momentum of a high-speed rotor to sense rotation, relative to inertial space, about one or two axes orthogonal to the spin axis. Angular-motion detection devices built on other principles but serving the same function are also called gyroscopes.
WebP: a picture file format that provides both lossy and lossless (reversible) compression, and supports ICC color profiles, XMP metadata, and an alpha transparency channel.
It should be noted that a user can use a digital collection that the user owns as the user's avatar on an interactive platform. At present such an avatar is displayed only as a static picture, so there is considerable room to enhance the value of the digital assets the user owns and has purchased. Because a digital collection used as a user avatar currently offers little interaction, the embodiments of the present application introduce a digital collection with a gyroscope effect: a digital collection used as a user avatar can change dynamically based on the parameters of the mobile terminal's gyroscope. This adds new ways of using a digital collection as an avatar and makes the collection more engaging; that is, programmable digital-collection material and special display effects give the user more benefits and more interactive play. In addition, the embodiments of the present application combine layer-by-layer production and rendering to provide a novel display effect and interactive experience for multi-layer pictures in a client application platform.
The present application provides an avatar display method. The application also relates to an avatar display apparatus, a computing device, and a computer-readable storage medium, which are described in detail one by one in the following embodiments.
Fig. 1 shows a flowchart of an avatar display method according to an embodiment of the present application, which specifically includes the following steps:
Step 102: acquire avatar material to be displayed, and determine at least two avatar layers corresponding to the avatar material.
Specifically, the avatar material is a multimedia resource used as the user's avatar in an application platform to represent the user's image; it may be a picture, a video, a sequence-frame animation, or the like.
In practice, a user may select the avatar material to display from the candidate materials provided by the application platform, or upload avatar material that was purchased, downloaded, or otherwise obtained. For example, the avatar material may be a digital collection purchased from a sales platform, or a picture or video downloaded from a picture or video website.
It should be noted that after the avatar material to be displayed is obtained, it can be divided into at least two avatar layers; each layer can subsequently be configured with its own parameters to customize its display effect, improving the flexibility of avatar display and, in turn, the user experience.
Different types of avatar material are divided into layers in different ways, so when determining the at least two avatar layers corresponding to the avatar material, the material can be divided based on its type.
In an optional implementation of this embodiment, when the avatar material is a video, the at least two avatar layers corresponding to the avatar material are determined as follows:
capturing video image frames from the avatar material at a set interval to obtain at least two video image frames;
and determining, for each video image frame, at least two avatar layers corresponding to that frame.
Specifically, the set interval is a preconfigured time interval that represents the capture frequency, for example 0.1 second, 0.2 second, or 1 second.
It should be noted that if the avatar material is a video, at least two video image frames are first captured from the material at the set interval, and each captured frame is split into at least two corresponding avatar layers, which may include a background layer and at least one display-object layer. Dynamic parameters can then be configured for the layers of each frame, so that while the current frame is being played in the avatar frame, its different layers change dynamically with the offset of the mobile terminal and present the corresponding dynamic effect.
In practice, at least two video image frames may be captured from the video, at least two corresponding avatar layers determined for each frame, and the dynamic parameters of those layers (movement amplitude and/or transparency) configured. Multiple frames of the video are thereby brought into the avatar frame, presenting a dual dynamic effect: the avatar changes both over time and with the offset of the mobile terminal. The video may be in various formats, such as WebP or SVGA.
For example, suppose the avatar material is a 15-second video and the set interval is 1 second. One video image frame is then captured every second, yielding 15 frames in total, and each frame is split into at least two avatar layers using layer technology.
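The frame-capture step above can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the function name `frame_capture_times` and its parameters are hypothetical.

```python
def frame_capture_times(duration_s: float, interval_s: float) -> list[float]:
    """Timestamps (in seconds) at which frames are captured from the avatar video."""
    if interval_s <= 0:
        raise ValueError("interval must be positive")
    times = []
    t = 0.0
    while t < duration_s:
        times.append(round(t, 6))
        t += interval_s
    return times

# A 15-second video sampled every 1 second yields 15 frames (t = 0 .. 14),
# matching the example in the text.
capture_times = frame_capture_times(15.0, 1.0)
```

Each captured frame would then be split into layers and assigned per-layer dynamic parameters, as described below.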
In practice, a sequence-frame animation is handled like a video: if the avatar material is a sequence-frame animation, each frame of the animation can be split into at least two avatar layers.
In an optional implementation of this embodiment, when the avatar material is a picture, the at least two avatar layers corresponding to the avatar material are determined as follows:
splitting the avatar material into at least two avatar layers, the at least two avatar layers comprising a background layer and at least one display-object layer.
It should be noted that if the avatar material is a picture, it can be split directly into at least two avatar layers using layer technology: a background layer and at least one display-object layer. A display-object layer contains at least one display object, which may be a person, a plant, an object, or an element of the environment (such as the sun, the moon, or a star).
In practice, splitting the avatar material into a background layer and display-object layers allows different dynamic parameters to be configured for each: for example, the background layer can remain still while a display-object layer moves left and right as the mobile terminal moves. The dynamic parameters of the different layers of the avatar material can thus be customized so that each layer has its own dynamic effect, enriching the display form and greatly improving the user experience.
In one example, suppose the avatar material is a picture that is split into a background layer and three display-object layers: the background layer is a mountain background, display-object layer 1 contains a sun and a tree, display-object layer 2 contains a virtual character, and display-object layer 3 contains a puppy.
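A minimal sketch of a layer data structure for the example above; the `AvatarLayer` type and field names are hypothetical illustrations, not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class AvatarLayer:
    """One layer of split avatar material."""
    name: str
    is_background: bool = False
    objects: list = field(default_factory=list)  # display objects on this layer

def split_example_material() -> list[AvatarLayer]:
    # Mirrors the example in the text: a mountain background plus three
    # display-object layers (sun and tree; virtual character; puppy).
    return [
        AvatarLayer("background", is_background=True, objects=["mountain"]),
        AvatarLayer("object-1", objects=["sun", "tree"]),
        AvatarLayer("object-2", objects=["virtual character"]),
        AvatarLayer("object-3", objects=["puppy"]),
    ]
```

In a real implementation the split would come from an image-segmentation or authoring step; here the layers are listed by hand purely to illustrate the structure.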
Step 104: acquire a current offset parameter of the mobile terminal, and determine dynamic parameters of the at least two avatar layers according to the current offset parameter, wherein different avatar layers have different dynamic parameters.
Specifically, the current offset parameter refers to the current offset direction and offset magnitude of the mobile terminal, such as moving one unit to the left, moving one unit to the right, or rotating 15 degrees down and to the left. In practice, the current offset parameter may be obtained from a gyroscope provided in the mobile terminal, which detects the terminal's current offset direction and offset angle.
It should be noted that after the at least two avatar layers corresponding to the avatar material are obtained, the current offset parameter of the mobile terminal is acquired and the dynamic parameters of the layers are configured according to it. Different layers can be given different dynamic parameters so that they exhibit different dynamic effects as the mobile terminal moves, enriching the display form, providing an interactive avatar, and offering a more distinctive display effect.
In an optional implementation of this embodiment, determining the dynamic parameters of the at least two avatar layers according to the current offset parameter includes:
determining a target configuration policy corresponding to the current offset parameter based on a preset correspondence between offset parameters and dynamic-parameter configuration policies;
and configuring the dynamic parameters of the at least two avatar layers according to the target configuration policy, wherein a dynamic parameter is a transparency or a movement amplitude.
It should be noted that a correspondence between offset parameters and dynamic-parameter configuration policies can be preconfigured in the mobile terminal. When an offset of the terminal is detected, the corresponding target configuration policy is determined from the current offset parameter, and the dynamic parameters of the at least two avatar layers are then configured according to that policy. A configuration policy may directly list the dynamic parameter of each avatar layer, describe the relationship between each layer's dynamic parameter and the offset parameter, describe the relationship between the layers' dynamic parameters, or be custom-defined.
If the configuration policy directly lists the dynamic parameter of each avatar layer, those values can be used directly as the layers' dynamic parameters; if the policy describes the relationship between each layer's dynamic parameter and the offset parameter, the dynamic parameters can be computed from the current offset parameter and that relationship; if the policy specifies a dynamic parameter for the background layer and, for each display-object layer, a change relative to the background layer, each layer's dynamic parameter can be determined from the background layer's parameter and those relative changes.
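The three policy kinds just described can be sketched as follows. The policy labels (`"direct"`, `"linear"`, `"relative"`) and the dictionary shape are assumptions made for illustration; the patent does not prescribe a data format.

```python
def resolve_dynamic_params(policy: dict, offset: float) -> list[float]:
    """Resolve per-layer dynamic parameters under one of three policy kinds."""
    kind = policy["kind"]
    if kind == "direct":
        # Policy lists one dynamic parameter per avatar layer.
        return list(policy["values"])
    if kind == "linear":
        # Each layer's parameter is its own multiple of the device offset.
        return [k * offset for k in policy["factors"]]
    if kind == "relative":
        # Background layer's parameter plus per-layer deltas for object layers.
        base = policy["background"]
        return [base] + [base + d for d in policy["deltas"]]
    raise ValueError(f"unknown policy kind: {kind}")
```

For example, a `"linear"` policy with factors `[0.5, -1.0, 0.0]` and a device offset of 2 units yields layer parameters `[1.0, -2.0, 0.0]`.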
In addition to determining the target configuration policy from the preset correspondence, manually customized dynamic parameters for the at least two avatar layers can be received when a configuration control is detected to be triggered; that is, different avatar layers can be configured with manually customized dynamic parameters.
For example, the correspondence between offset parameters and dynamic-parameter configuration policies may be as shown in Table 1 below:
Table 1. Correspondence between offset parameters and dynamic-parameter configuration policies
In practice, a dynamic parameter may be a transparency or a movement amplitude; different transparencies or movement amplitudes create different dynamic effects for each avatar layer. By configuring the transparency of different layers, one picture can be turned into several different pictures, so the user sees multiple images based on a single picture; by configuring the movement amplitude of different layers, the user sees some objects in the picture move as the mobile terminal moves. Both enrich the displayed content and further improve the user experience.
For example, Fig. 2a is a schematic flowchart of special-effect parameter configuration according to an embodiment of the present application. As shown in Fig. 2a, the avatar material is divided into a layer A and layers B1-Bn, where layer A is the background layer and layers B1-Bn are display-object layers. Gyroscope data of the mobile terminal is obtained, and the special-effect parameters of layers B1-Bn are set based on the gyroscope data. The special-effect parameters may be dynamic parameters, such as movement amplitude and transparency, and may further include subsequent frame-output parameters.
Fig. 2b is a schematic interface diagram of special-effect parameter configuration according to an embodiment of the present application. As shown in Fig. 2b, the avatar material contains layers 1 to 5, and the configuration interface displays the rule: each layer shifts in a certain direction based on the offset direction and offset amplitude of the mobile terminal, and both the direction and the amplitude are configurable. Suppose the mobile terminal is a mobile phone that shifts one unit to the right; layers 1 to 5 may then be configured to shift 0.5 unit to the right, 1 unit to the left, not at all, 2 units to the left, and 0.5 unit to the right, respectively.
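The per-layer shift rule in the Fig. 2b example can be sketched as a simple parallax computation. This is an illustrative sketch; the function name and the sign convention (positive = right, negative = left) are assumptions.

```python
def layer_offsets(device_offset: float, factors: list[float]) -> list[float]:
    """Per-layer horizontal shift: each layer moves by its own configured
    multiple of the device offset (positive = right, negative = left)."""
    return [f * device_offset for f in factors]

# Device shifted 1 unit to the right; per-layer factors taken from the
# example in the text (layers 1-5).
offsets = layer_offsets(1.0, [0.5, -1.0, 0.0, -2.0, 0.5])
```

Because each layer has a different factor, foreground and background layers drift at different rates as the phone moves, producing the parallax-style dynamic effect described above.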
In another example, suppose there are five avatar layers, namely background layer 1 and display-object layers 1-4, and the current offset parameter of the mobile terminal is a 5-degree shift to the left. Suppose the corresponding target configuration policy is: the transparency of the background layer is 0%, the transparency of display-object layer 1 is 100% (its content is invisible), the transparency of display-object layer 2 is 20%, the transparency of display-object layer 3 is 100%, and the transparency of display-object layer 4 is 50%. Each avatar layer is then configured with these transparencies; after configuration, the contents of display-object layers 1 and 3 cannot be seen, and since the background layer, display-object layer 2, and display-object layer 4 differ in transparency, the displayed effect differs from the corresponding original picture.
Step 106: dynamically display the avatar material in the avatar frame according to the dynamic parameters of the at least two avatar layers.
It should be noted that after the dynamic parameters of the at least two avatar layers are determined from the current offset parameter of the mobile terminal, the avatar material is dynamically rendered and displayed in the avatar frame based on those parameters; that is, the at least two avatar layers are displayed together as the user's avatar. The avatar is displayed dynamically layer by layer and moves as the mobile terminal moves, providing an avatar display with interaction and special effects and improving the display effect. Since the dynamic parameters of the layers can be customized, the display form is enriched and the user experience is greatly improved.
In practice, different types of avatar material are displayed dynamically in the avatar frame in different ways, so in a specific implementation the avatar material can be displayed according to the dynamic parameters of the at least two avatar layers based on its type.
In an optional implementation of this embodiment, when the avatar material is a video, it is dynamically displayed in the avatar frame according to the dynamic parameters of the at least two avatar layers as follows:
acquiring the timestamps of the at least two video image frames;
and dynamically displaying the avatar material in the avatar frame according to the dynamic parameters corresponding to each video image frame and the timestamps.
It should be noted that when the avatar material is a video, the timestamps of the video image frames represent their display order. When the material is displayed in the avatar frame, the frames are played cyclically in timestamp order; that is, the displayed user avatar loops through a segment of video. Each frame is split into at least two avatar layers, and those layers are configured with dynamic parameters based on the offset parameter of the mobile terminal. In other words, the frames are played in timestamp order, and at each moment the dynamic parameters of the layers of the current frame are determined from the current offset parameter of the mobile terminal, and the frame is displayed dynamically based on them.
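Cyclic playback in timestamp order can be sketched as follows; the function name and looping scheme (wrapping the elapsed time by the sequence period) are illustrative assumptions.

```python
def current_frame_index(timestamps: list[float], elapsed_s: float) -> int:
    """Pick the frame whose timestamp window contains the elapsed playback
    time, wrapping around so the sequence plays in a loop."""
    if not timestamps:
        raise ValueError("no frames")
    # Period: last timestamp plus one frame interval (assume uniform spacing).
    step = timestamps[1] - timestamps[0] if len(timestamps) > 1 else 1.0
    period = timestamps[-1] + step
    t = elapsed_s % period
    # Last frame whose timestamp is <= wrapped time.
    idx = 0
    for i, ts in enumerate(timestamps):
        if ts <= t:
            idx = i
    return idx
```

On each display tick the renderer would select the current frame this way, then apply the per-layer dynamic parameters derived from the gyroscope offset before drawing, giving the dual time-plus-offset effect described in the text.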
In the embodiment of the present application, for each video image frame, the dynamic parameters of its at least two avatar layers can be configured based on the current offset parameter of the mobile terminal, so that while the frame is played in the avatar frame its layers change dynamically with the terminal's offset and present the corresponding dynamic effect. That is, besides the video being played in the avatar frame as the avatar, each frame of the video also changes dynamically with the offset of the mobile terminal, presenting a dual dynamic effect over time and with the terminal's movement, enriching the display form and greatly improving the user experience.
In an optional implementation of this embodiment, besides being displayed dynamically in the avatar frame, the avatar may also be displayed in an out-of-frame manner, that is, a partial area of the avatar material extends beyond the avatar frame. In that case the avatar material carries an out-of-frame parameter, and dynamically displaying the avatar material in the avatar frame according to the dynamic parameters of the at least two avatar layers may proceed as follows:
displaying the avatar material in the avatar frame according to the out-of-frame parameter and the dynamic parameters of the at least two avatar layers, where the out-of-frame parameter indicates that a designated area of a target layer in the avatar material extends beyond the avatar frame.
It should be noted that the out-of-frame parameter indicates that a partial area of a certain layer in the avatar material lies outside the avatar frame. Therefore, if the avatar material also carries an out-of-frame parameter, the avatar material can be displayed in the avatar frame according to the out-of-frame parameter together with the dynamic parameters of the at least two avatar layers.
In practical application, as long as the current avatar material supports out-of-frame display, the out-of-frame effect can be shown when the avatar is displayed. Out-of-frame display does not affect the delivery of the user identifiers attached to the current avatar, and those identifiers keep an independent display size; that is, user identifiers of a set size, such as a user-level badge or an authentication mark, can still be shown at their set positions on top of the out-of-frame display.
In an optional implementation manner of this embodiment, the avatar material is displayed in the avatar frame according to the frame-out parameter and the dynamic parameters of the at least two avatar layers, and the specific implementation process may be as follows:
determining a target layer and a frame-out area of the target layer in the head portrait material according to the frame-out parameters, and configuring corresponding display masks according to the target layer and the frame-out area of the target layer;
and according to the dynamic parameters, rendering a target layer in the head portrait frame based on the display mask, and rendering other layers except the target layer in the head portrait frame based on the layer sequence of at least two head portrait layers.
It should be noted that the out-of-frame parameter indicates that the designated area of the target layer in the avatar material extends beyond the avatar frame. Here the target layer refers to the layer, among the at least two avatar layers, that needs to be displayed beyond the avatar frame, for example a character layer; the designated area specifies which part of the target layer should extend beyond the frame, for example the crown area of the character in that character layer.
Therefore, in practical application, the target layer and its out-of-frame area in the avatar material may be determined according to the out-of-frame parameter, and a corresponding display mask generated from them. The target layer is then rendered based on the display mask, while the other layers are rendered in the avatar frame based on the layer order of the at least two avatar layers, so that the designated area of the target layer appears beyond the frame. The target layer can additionally be displayed in combination with the dynamic parameters, achieving the dynamic effect and the out-of-frame effect at the same time.
In this embodiment of the application, the avatar material is loaded layer by layer into a multi-layer container, and the out-of-frame effect is achieved by masking, clipping, scaling and similar means; that is, by exposing only the designated region of the target layer through the mask, the avatar appears to break out of its frame.
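The masking step above can be sketched as follows. This is a simplified model that assumes a circular avatar frame and a rectangular out-of-frame region; real implementations would use the platform's own clipping and masking primitives rather than a boolean pixel grid.

```python
def inside_frame(x, y, cx, cy, r):
    """True if pixel (x, y) lies inside a circular avatar frame."""
    return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

def build_display_mask(width, height, frame, out_region):
    """Boolean mask for the target layer: the frame interior plus the
    designated out-of-frame region (modeled here as a rectangle)."""
    cx, cy, r = frame
    x0, y0, x1, y1 = out_region
    return [[inside_frame(x, y, cx, cy, r) or (x0 <= x < x1 and y0 <= y < y1)
             for x in range(width)]
            for y in range(height)]

def render_order(layer_names, target):
    """Non-target layers are clipped to the frame and drawn in layer order;
    the target layer is drawn through its display mask."""
    return [(name, "mask" if name == target else "frame_clip")
            for name in layer_names]
```

Pixels inside the frame or inside the designated region pass the mask, so the target layer's crown, ears, or tail can reach outside the circle while everything else stays clipped.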
In an optional implementation of this embodiment, many scenes in the application platform need to show the user avatar, but only some of them suit special-effect presentation such as out-of-frame or dynamic display; others are better served by the common static-picture presentation. Therefore, before the avatar material is dynamically displayed in the avatar frame according to the dynamic parameters of the at least two avatar layers, the method further includes:
receiving a head portrait display instruction, wherein the head portrait display instruction carries a display scene identifier;
and under the condition that the target display scene indicated by the display scene identification supports special effect display, executing an operation step of dynamically displaying the head portrait materials in the head portrait frame according to the dynamic parameters of the at least two head portrait image layers, wherein the special effect display comprises dynamic display and/or out-of-frame display.
Specifically, the avatar display instruction is an instruction triggered by a user through a set operation in the user platform. It indicates that the user's avatar needs to be displayed in a corresponding display scene; that is, the instruction carries a display scene identifier, which indicates the target scene where the avatar is to be displayed. The display scene may be a personal space, a personal feed, a visitor feed, a comment page, a personal homepage, a friend list and the like.
In practical application, whether each display scene supports special-effect display can be configured in advance based on the characteristics of the scene. Display scenes such as the personal space, the personal feed, comment pages and the personal homepage have ample room for user information and can support special-effect modes such as dynamic display and/or out-of-frame display, so these scenes can be configured to support special effects. The visitor feed, the friend list and similar scenes have many users to display and limited room for user information, making them unsuitable for dynamic and/or out-of-frame display, so they can be configured not to support special effects.
In specific implementation, when an avatar display instruction is received, the avatar material to be displayed is obtained, and whether the indicated target scene supports special-effect display is determined from the display scene identifier carried in the instruction. If it does, the avatar material is dynamically displayed in the avatar frame according to the dynamic parameters of the at least two avatar layers; if an out-of-frame parameter is also carried, out-of-frame display is applied as well.
In addition, if the target scene does not support special-effect display, a static picture corresponding to the avatar material is obtained directly, cropped to the shape of the avatar frame, and embedded in the frame, displaying the avatar material in the common mode. If the avatar material is a picture, the static picture is that picture itself; if the avatar material is a video or a frame-sequence animation, one frame can be selected at random, or a frame captured by the user, to serve as the static picture shown in the common mode.
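The scene check and static fallback described above can be modeled as a small lookup. The scene names and flags below mirror the examples in the description and are hypothetical, not a real platform's configuration.

```python
# Hypothetical per-scene configuration following the examples in the text.
SCENE_SUPPORTS_EFFECTS = {
    "personal_space": True,
    "personal_feed": True,
    "comment_page": True,
    "personal_homepage": True,
    "visitor_feed": False,
    "friend_list": False,
}

def display_mode(scene_id, material_type):
    """Pick the presentation mode for the scene carried by the display instruction."""
    if SCENE_SUPPORTS_EFFECTS.get(scene_id, False):
        return "special_effect"        # dynamic and/or out-of-frame display
    if material_type == "picture":
        return "static_picture"        # the picture itself, cropped to the frame
    return "static_captured_frame"     # one frame taken from the video/animation
```

Unknown scenes default to the common mode, which matches the conservative fallback behavior the description implies for scenes without special-effect support.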
For example, when the user taps the personal space, an avatar display instruction carrying the identifier of the personal space is triggered. Assuming the personal space supports special-effect display, the avatar can then be shown in dynamic and/or out-of-frame mode at the avatar position at the top of the personal space.
As an example, fig. 2c is a schematic diagram of a first avatar display page provided in an embodiment of the present application, illustrating avatar display on a personal space page; the page includes an out-of-frame avatar, a follower count XXX, a following count XXX, a like count XXX, and a follow control "follow +". Fig. 2d is a schematic diagram of a second avatar display page provided in an embodiment of the present application, illustrating avatar display on a feed page in the personal feed scene; the page includes an out-of-frame avatar, a release message "posted this video 11 minutes ago", the released video content and a text description "xxxxxxxx". Fig. 2e is a schematic diagram of a third avatar display page provided in an embodiment of the present application, illustrating avatar display on a comment page, which shows the user's out-of-frame avatar together with the corresponding comment time and comment content. Fig. 2f is a schematic diagram of a fourth avatar display page provided in an embodiment of the present application, illustrating avatar display on a personal homepage, which shows the user's out-of-frame avatar and basic user information including the user name, user level, virtual resources and the like. As shown in figs. 2c-2f, the circle represents the avatar frame, and the head and tail of the cartoon mouse extend beyond it, producing the out-of-frame display.
In addition, for list-type scenes, to preserve the performance of the list page, the number of avatars shown with special effects can be limited; only that limited number of avatars are shown with special effects on the list page, and the rest are shown in the common mode. For example, if the limit is 10, only the first 10 avatars on the list page are ever shown with special effects, and the remaining avatars are shown in the common mode.
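The per-list cap can be expressed in a couple of lines; the limit of 10 below is the example value from the description, not a fixed constant of the method.

```python
def list_display_modes(n_avatars, effect_limit=10):
    """Only the first `effect_limit` avatars on a list page keep special-effect
    display; the rest fall back to the common mode."""
    return ["special_effect" if i < effect_limit else "common"
            for i in range(n_avatars)]
```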
In this embodiment of the application, different display scenes can be distinguished: dynamic and/or out-of-frame display is used when the current display scene supports special effects, and common display is used when it does not. The avatar display method is therefore highly flexible and can adapt to many different display scenes.
In an optional implementation manner of this embodiment, a pendant may be further added to the displayed avatar, that is, after the avatar material is dynamically displayed in the avatar frame according to the dynamic parameters of the at least two avatar image layers, the method further includes:
receiving a pendant adding instruction aiming at the displayed head portrait material, and displaying a replacement confirmation control to a user;
and under the condition that the replacement confirmation control is triggered, acquiring a static picture corresponding to the head portrait material, embedding the static picture into the head portrait frame for displaying, and adding a target head portrait pendant indicated by the pendant adding instruction on the static picture.
It should be noted that adding an avatar pendant requires cancelling the current special-effect display (including dynamic display and/or out-of-frame display). So, when a pendant-adding instruction for the displayed avatar material is received, a replacement confirmation control is shown to the user together with a prompt that the current special effect will be lost. If the control is triggered, the user has confirmed cancelling the special effect, the display switches to the common mode, and the corresponding avatar pendant is added.
For example, fig. 2g is a schematic flow chart of adding an avatar pendant according to an embodiment of the present application. As shown in fig. 2g, when an avatar pendant is added, it is first determined whether the current avatar is a special-effect avatar (i.e., one with dynamic and/or out-of-frame display). If not, the pendant is added to the current avatar directly. If so, a reminder pop-up is shown, telling the user that adding the pendant will cancel the special-effect display (dynamic and/or out-of-frame), and the user's choice is awaited: on confirmation, the special-effect display is switched to common display and the pendant is added; otherwise the flow returns to the pendant-adding step.
In this embodiment of the application, when a special-effect avatar is in use and the user chooses to wear an avatar pendant, a confirmation pop-up is invoked; if the user confirms the replacement, the special-effect avatar is switched to common display and the corresponding pendant is added. Likewise, whenever special-effect display is unavailable, the avatar can fall back to the common mode without the special effect (layers with the out-of-frame effect are shown as a static image).
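The flow of fig. 2g reduces to a few state transitions. The dict shape of `avatar` and the `confirm` callback are assumptions made for illustration; the real client would drive this through its own dialog and settings APIs.

```python
def add_pendant(avatar, pendant, confirm):
    """Pendant-adding flow of fig. 2g as plain state transitions.
    `avatar` is a dict with 'special_effect' and 'pendant' keys;
    `confirm` is a callable returning the user's pop-up choice."""
    if not avatar["special_effect"]:
        # common avatar: add the pendant directly
        return {**avatar, "pendant": pendant}, "added"
    if not confirm():
        # the special effect would be lost; user declined
        return avatar, "cancelled"
    # confirmed: downgrade to common display, then add the pendant
    return {**avatar, "special_effect": False, "pendant": pendant}, "added"
```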
In addition, if the application on the user's mobile terminal is not the latest version, special-effect display is not supported and the avatar appears in the common style in that low-version application. However, because the user's avatar is shown in the special-effect style in the high-version applications of other users, attempting to add an avatar pendant still triggers the secondary confirmation.
In an optional implementation manner of this embodiment, the user may further replace the current avatar, that is, after dynamically displaying the avatar material in the avatar frame according to the dynamic parameters of the at least two avatar layers, the method further includes:
receiving a head portrait replacing instruction, wherein the head portrait replacing instruction carries head portrait updating materials;
determining a first display parameter of the updated head portrait material, and determining a second display parameter of the current head portrait material in the head portrait frame;
and replacing the current head portrait material in the head portrait frame with the updated head portrait material based on the first display parameter and the second display parameter.
It should be noted that, after an avatar replacement instruction is received, how to perform the replacement can be determined from the display parameters of the avatar material before and after the update, ensuring that common avatars and special-effect avatars can be switched to each other while preserving the display effect.
In an optional implementation manner of this embodiment, the first display parameter is special effect display, the special effect display includes dynamic display and/or out-of-frame display, and the second display parameter is general display; based on the first display parameter and the second display parameter, replacing the current head portrait material in the head portrait frame with the updated head portrait material, including:
determining whether a head portrait pendant exists in the current head portrait material;
if the head portrait pendant exists, removing the head portrait pendant of the current head portrait material, and replacing the current head portrait material with the updated head portrait material;
and if the head portrait pendant does not exist, replacing the current head portrait material with the updated head portrait material.
In practical application, fig. 2h is a schematic flow chart of replacing an avatar according to an embodiment of the present application. As shown in fig. 2h, when an avatar replacement instruction is received, it is determined whether the current avatar material uses special-effect display. If it does, it is then determined whether the updated avatar material uses special-effect display: if so, the current material is replaced with the updated material directly; if not, the current material is replaced with the updated material and an avatar pendant is added to the updated material. If the current avatar material does not use special-effect display, it is likewise determined whether the updated material does: if not, the current material is replaced directly; if so and the current material carries an avatar pendant, the pendant is removed before the replacement, while if it carries no pendant the replacement is performed directly. In this way, the method decides how to replace the avatar based on the display parameters before and after the update, so that common and special-effect avatars can be switched to each other while preserving the display effect.
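The four branches of fig. 2h can be sketched as one decision function. The dict records and the 'pendant' field are illustrative assumptions about how the client might represent the materials.

```python
def replace_avatar(current, update):
    """Replacement decision of fig. 2h. `current` and `update` are dicts with a
    'special_effect' flag; `current` may also carry a 'pendant'."""
    if current["special_effect"]:
        if update["special_effect"]:
            return dict(update)                            # special -> special
        # special -> common: the plain avatar may carry a pendant again
        return {**update, "pendant": current.get("pendant")}
    if not update["special_effect"]:
        return dict(update)                                # common -> common
    # common -> special: any pendant on the current avatar is removed first
    return {**update, "pendant": None}
```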
With the avatar display method provided by the present application, after the avatar material is obtained, it can be split into at least two avatar layers; the dynamic parameter of each layer is determined from the current offset parameter of the mobile terminal, and the layers are rendered in the avatar frame based on those dynamic parameters, so that the obtained avatar material is displayed dynamically in the frame. In addition, a partial area of the target layer among the at least two avatar layers can extend beyond the avatar frame, realizing out-of-frame display. By splitting the avatar material into layers and rendering each layer according to its dynamic parameter and the out-of-frame parameter, the avatar material can change dynamically in the avatar frame as the mobile terminal is tilted, and some layers can be shown beyond the frame. This provides an interactive form of avatar and a more distinctive display effect for the user; moreover, the dynamic parameters and out-of-frame parameters of the dynamically displayed avatar layers can be customized, enriching the display form and greatly improving the user experience.
The following description further describes the avatar display method with reference to fig. 3, taking the application of the avatar display method provided in the present application in a video platform as an example. Fig. 3 shows a processing flow chart of an avatar display method applied in a video platform according to an embodiment of the present application, which specifically includes the following steps:
step 302: the video platform acquires a head portrait material to be displayed, wherein the head portrait material is a picture and carries frame-out parameters.
Step 304: the video platform splits the head portrait material into at least two head portrait layers, wherein the at least two head portrait layers comprise a background layer and at least one display object layer.
Step 306: the video platform acquires the current offset parameter of the mobile terminal, and determines the dynamic parameters of at least two head portrait layers according to the current offset parameter, wherein the dynamic parameters of different head portrait layers are different.
Step 308: the video platform determines a target layer and a frame-out area of the target layer in the head portrait material according to the frame-out parameters, and configures a corresponding display mask according to the target layer and the frame-out area of the target layer.
Step 310: and the video platform renders a target layer based on the display mask at the head portrait frame according to the dynamic parameters, renders other layers except the target layer at the head portrait frame based on the layer sequence of at least two head portrait layers, and the specified area of the target layer in the head portrait material exceeds the head portrait frame so as to realize the special effect of dynamic display and out-of-frame display.
Step 312: and the video platform receives a pendant adding instruction aiming at the displayed head portrait material and displays the replacement confirmation control for the user.
Step 314: and the video platform acquires a static picture corresponding to the head portrait material under the condition that the replacement confirmation control is triggered, embeds the static picture into the head portrait frame for display, and adds a target head portrait pendant indicated by the pendant adding instruction on the static picture.
Step 316: the video platform receives an avatar replacement instruction, where the avatar replacement instruction carries updated avatar material; and determines a first display parameter of the updated avatar material and a second display parameter of the current avatar material in the avatar frame.
Step 318: the video platform determines whether the head portrait material has a head portrait pendant or not under the condition that the first display parameter is special effect display and the second display parameter is ordinary display; if the head portrait pendant exists, removing the head portrait pendant of the current head portrait material, and replacing the current head portrait material with the updated head portrait material; and if the head portrait pendant does not exist, replacing the current head portrait material with the updated head portrait material.
The special effect display comprises dynamic display and/or frame-out display.
According to the method for displaying the head portrait, after the video platform obtains the head portrait materials, the video platform can divide the head portrait materials into at least two head portrait layers, dynamic parameters of each head portrait layer are determined based on current offset parameters of the mobile terminal, rendering display is carried out in a head portrait frame based on the dynamic parameters of each head portrait layer, and the obtained head portrait materials are dynamically displayed in the head portrait frame; in addition, partial areas of the target image layers in the at least two head portrait image layers can exceed the head portrait frame, and frame-out display is achieved. Therefore, the layers are divided for the head portrait materials, the effect that the head portrait materials dynamically change along with the offset of the mobile terminal can be achieved in the head portrait frame based on the dynamic parameters of each layer and the frame-out parameters, the effect that some layers can be displayed in the frame-out mode is achieved, the interactive mode of the head portrait is provided, more special display effects are provided for a user, the dynamic parameters and the frame-out parameters of the dynamic display of a plurality of head portrait layers can be defined by the user, the display mode is enriched, and the user experience is greatly improved.
Corresponding to the above method embodiment, the present application further provides an embodiment of a head portrait display apparatus, and fig. 4 shows a schematic structural diagram of the head portrait display apparatus provided in an embodiment of the present application. As shown in fig. 4, the apparatus includes:
a first determining module 402, configured to obtain an avatar material to be displayed, and determine at least two avatar layers corresponding to the avatar material;
a second determining module 404, configured to obtain a current offset parameter of the mobile terminal, and determine dynamic parameters of at least two avatar layers according to the current offset parameter, where the dynamic parameters of different avatar layers are different;
and the display module 406 is configured to dynamically display the avatar material in the avatar frame according to the dynamic parameters of the at least two avatar layers.
Optionally, the head portrait material is a video; a first determining module 402, further configured to:
intercepting a head portrait material according to a set interval to obtain at least two video image frames;
determining at least two head portrait layers corresponding to the video image frames aiming at each video image frame;
accordingly, the presentation module 406 is further configured to:
acquiring time stamps of at least two video image frames;
and dynamically displaying the head portrait materials in the head portrait frame according to the dynamic parameters corresponding to each video image frame and the time stamp.
Optionally, the head portrait material is a picture; a first determining module 402, further configured to:
splitting the head portrait material into at least two head portrait layers, wherein the at least two head portrait layers comprise a background layer and at least one display object layer.
Optionally, the second determining module 404 is further configured to:
determining a target configuration strategy corresponding to the current offset parameter based on the corresponding relation between the set offset parameter and the dynamic parameter configuration strategy;
and configuring the dynamic parameters of the at least two avatar layers according to the target configuration strategy, where the dynamic parameters are transparency or movement amplitude.
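One possible configuration strategy, mapping the terminal's offset to the two kinds of dynamic parameter named above, might look as follows. The linear mapping, the thresholds, and the record shapes are illustrative assumptions; the application only requires that different layers receive different parameters.

```python
def configure_dynamic_params(offset_deg, layers):
    """Map the terminal's tilt offset (degrees) to per-layer dynamic parameters:
    background layers react through transparency, display-object layers through
    movement amplitude (parallax shift)."""
    params = {}
    for layer in layers:
        if layer["kind"] == "background":
            # fade the background slightly as the tilt grows
            alpha = max(0.4, 1.0 - abs(offset_deg) / 90.0)
            params[layer["name"]] = {"transparency": round(alpha, 2)}
        else:
            # shift foreground objects proportionally to tilt and depth
            shift = offset_deg * layer.get("depth", 1.0)
            params[layer["name"]] = {"shift_px": round(shift, 1)}
    return params
```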
Optionally, the avatar material carries an out-of-frame parameter; the presentation module 406 is further configured to:
displaying the head portrait materials in the head portrait frame according to the frame-out parameters and the dynamic parameters of the at least two head portrait layers, wherein the frame-out parameters are used for indicating that the designated area of the target layer in the head portrait materials exceeds the head portrait frame.
Optionally, the presentation module 406 is further configured to:
determining a target layer and a frame-out area of the target layer in the head portrait material according to the frame-out parameters, and configuring corresponding display masks according to the target layer and the frame-out area of the target layer;
and according to the dynamic parameters, rendering a target layer in the head portrait frame based on the display mask, and rendering other layers except the target layer in the head portrait frame based on the layer sequence of at least two head portrait layers.
Optionally, the apparatus further comprises an adding module configured to:
receiving a pendant adding instruction aiming at the displayed head portrait material, and displaying a replacement confirmation control to a user;
and under the condition that the replacement confirmation control is triggered, obtaining a static picture corresponding to the head portrait material, embedding the static picture into the head portrait frame for display, and adding a target head portrait pendant indicated by the pendant adding instruction on the static picture.
Optionally, the apparatus further comprises a replacement module configured to:
receiving a head portrait replacing instruction, wherein the head portrait replacing instruction carries head portrait updating materials;
determining a first display parameter of the updated head portrait material, and determining a second display parameter of the current head portrait material in the head portrait frame;
and replacing the current head portrait materials in the head portrait frame with the updated head portrait materials based on the first display parameters and the second display parameters.
Optionally, the first display parameter is special effect display, the special effect display comprises dynamic display and/or out-of-frame display, and the second display parameter is common display; a replacement module further configured to:
determining whether a head portrait pendant exists in the current head portrait material;
if the head portrait pendant exists, removing the head portrait pendant of the current head portrait material, and replacing the current head portrait material with the updated head portrait material;
and if the head portrait pendant does not exist, replacing the current head portrait material with the updated head portrait material.
Optionally, the apparatus further comprises a receiving module configured to:
receiving a head portrait display instruction, wherein the head portrait display instruction carries a display scene identifier;
the display module 406 is executed in a case that the target display scene indicated by the display scene identification supports special effect display, where the special effect display includes dynamic display and/or out-of-frame display.
With the avatar display apparatus provided by the present application, after the avatar material is obtained, it can be split into at least two avatar layers; the dynamic parameter of each layer is determined from the current offset parameter of the mobile terminal, and the layers are rendered in the avatar frame based on those dynamic parameters, so that the obtained avatar material is displayed dynamically in the frame. In addition, a partial area of the target layer among the at least two avatar layers can extend beyond the avatar frame, realizing out-of-frame display. By splitting the avatar material into layers and rendering each layer according to its dynamic parameter and the out-of-frame parameter, the apparatus makes the avatar material change dynamically in the avatar frame as the mobile terminal is tilted and lets some layers appear beyond the frame, providing an interactive form of avatar and a more distinctive display effect for the user; moreover, the dynamic parameters and out-of-frame parameters of the dynamically displayed avatar layers can be customized, enriching the display form and greatly improving the user experience.
The above is a schematic scheme of the avatar display apparatus of this embodiment. It should be noted that the technical solution of the avatar display apparatus and the technical solution of the avatar display method belong to the same concept, and details that are not described in detail in the technical solution of the avatar display apparatus can be referred to the description of the technical solution of the avatar display method.
Fig. 5 shows a block diagram of a computing device according to an embodiment of the present application. The components of the computing device 500 include, but are not limited to, a memory 510 and a processor 520. Processor 520 is coupled to memory 510 via bus 530, and database 550 is used to store data.
Computing device 500 also includes access device 540, which enables computing device 500 to communicate via one or more networks 560. Examples of such networks include a Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. The access device 540 may include one or more of any type of network interface, wired or wireless (e.g., a network interface controller (NIC)), such as an IEEE 802.11 wireless local area network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a near-field communication (NFC) interface, and so forth.
In one embodiment of the application, the above-described components of computing device 500 and other components not shown in FIG. 5 may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 5 is for purposes of example only and is not limiting as to the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 500 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 500 may also be a mobile or stationary server.
The processor 520 is configured to execute computer-executable instructions to implement the operational steps of any of the avatar display methods described above.
The foregoing is a schematic diagram of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the avatar display method described above belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the avatar display method described above.
An embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions, which when executed by a processor, are used for implementing the steps of any of the avatar display methods described above.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the avatar display method described above, and for details that are not described in detail in the technical solution of the storage medium, reference may be made to the description of the technical solution of the avatar display method described above.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (13)

1. A head portrait display method is characterized by comprising the following steps:
acquiring a head portrait material to be displayed, and determining at least two head portrait layers corresponding to the head portrait material;
acquiring a current offset parameter of the mobile terminal, and determining dynamic parameters of the at least two head portrait layers according to the current offset parameter, wherein the dynamic parameters of different head portrait layers are different;
and dynamically displaying the head portrait materials in a head portrait frame according to the dynamic parameters of the at least two head portrait image layers.
2. The method for displaying the avatar of claim 1, wherein the avatar material is a video; the determining of the at least two head portrait layers corresponding to the head portrait material includes:
capturing frames from the head portrait material at a set interval to obtain at least two video image frames;
determining at least two head portrait layers corresponding to each video image frame;
correspondingly, the dynamically displaying the avatar material in the avatar frame according to the dynamic parameters of the at least two avatar layers includes:
acquiring time stamps of the at least two video image frames;
and dynamically displaying the avatar material in the avatar frame according to the time stamp according to the dynamic parameters corresponding to each video image frame.
3. The method for displaying the avatar of claim 1, wherein the avatar material is a picture; the determining of the at least two head portrait layers corresponding to the head portrait material includes:
splitting the head portrait material into at least two head portrait layers, wherein the at least two head portrait layers comprise a background layer and at least one display object layer.
4. The method for displaying the avatar of claim 1, wherein said determining the dynamic parameters of said at least two avatar layers according to said current offset parameter comprises:
determining a target configuration strategy corresponding to the current offset parameter based on the corresponding relation between the set offset parameter and the dynamic parameter configuration strategy;
and configuring dynamic parameters of the at least two head portrait layers according to the target configuration strategy, wherein the dynamic parameters are transparency or moving amplitude.
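The correspondence in claim 4 between offset parameters and dynamic-parameter configuration strategies can be sketched as a lookup over offset ranges. The range boundaries, strategy contents, and parameter names (`alpha` for transparency, `amplitude` for moving amplitude) are illustrative assumptions.

```python
# Hypothetical sketch of claim 4: offset -> configuration-strategy lookup.

STRATEGIES = [
    # (max_abs_offset, per-layer dynamic parameters)
    (0.2, {"background": {"alpha": 1.0, "amplitude": 2.0},
           "subject":    {"alpha": 1.0, "amplitude": 6.0}}),
    (1.0, {"background": {"alpha": 0.9, "amplitude": 4.0},
           "subject":    {"alpha": 1.0, "amplitude": 12.0}}),
]

def configure(offset):
    """Pick the target configuration strategy for the current offset."""
    mag = abs(offset)
    for limit, params in STRATEGIES:
        if mag <= limit:
            return params
    return STRATEGIES[-1][1]      # fall back to the widest strategy
```

A small offset thus selects a subtler strategy, while a large one selects stronger transparency and movement values per layer.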
5. The avatar display method of claim 1, wherein the avatar material carries an out-of-frame parameter; the dynamically displaying the avatar material in the avatar frame according to the dynamic parameters of the at least two avatar layers includes:
and displaying the head portrait materials in the head portrait frame according to the frame-out parameters and the dynamic parameters of the at least two head portrait layers, wherein the frame-out parameters are used for indicating that the specified area of the target layer in the head portrait materials exceeds the head portrait frame.
6. The method for displaying the avatar according to claim 5, wherein said displaying the avatar material in the avatar frame according to the out-frame parameter and the dynamic parameters of the at least two avatar layers comprises:
determining a target layer in the head portrait material and a frame-out area of the target layer according to the frame-out parameters, and configuring a corresponding display mask according to the target layer and the frame-out area of the target layer;
and according to the dynamic parameters, rendering the target layer in the head portrait frame based on the display mask, and rendering other layers except the target layer in the head portrait frame based on the layer sequence of the at least two head portrait layers.
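The mask-based rendering order of claim 6 can be sketched as follows: non-target layers are drawn clipped to the avatar frame in layer order, and the target layer is drawn last through a display mask that also covers its out-of-frame region. The "painter's list" output and region names are assumptions standing in for a real renderer.

```python
# Hypothetical sketch of claim 6: draw order with per-layer clip regions.

def render_plan(layers, target, out_region):
    """Return the draw order together with the clip region for each layer."""
    plan = []
    for name in layers:                       # bottom-to-top layer order
        if name != target:
            plan.append((name, "avatar_frame"))
    # target layer drawn last; its mask is the frame plus the out-of-frame area
    plan.append((target, ("avatar_frame", out_region)))
    return plan

plan = render_plan(["background", "subject"], "subject", "top_strip")
```

Because the target layer's mask extends beyond the frame, its designated area appears to "break out" of the avatar frame while all other layers stay clipped inside it.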
7. The method for displaying the avatar according to any one of claims 1-6, wherein after dynamically displaying the avatar material in the avatar frame according to the dynamic parameters of the at least two avatar layers, the method further comprises:
receiving a pendant adding instruction aiming at the displayed head portrait material, and displaying a replacement confirmation control to a user;
and under the condition that the replacement confirmation control is triggered, acquiring a static picture corresponding to the avatar material, embedding the static picture into the avatar frame for display, and adding, on the static picture, the target avatar pendant indicated by the pendant adding instruction.
8. The method according to any one of claims 1 to 6, wherein after dynamically displaying the avatar material in an avatar frame according to the dynamic parameters of the at least two avatar layers, the method further comprises:
receiving a head portrait replacing instruction, wherein the head portrait replacing instruction carries an updated head portrait material;
determining a first display parameter of the updated head portrait material, and determining a second display parameter of the current head portrait material in the head portrait frame;
and replacing the current head portrait material in the head portrait frame with the updated head portrait material based on the first display parameter and the second display parameter.
9. The avatar display method of claim 8, wherein the first display parameter is special effect display, the special effect display comprises dynamic display and/or out-of-frame display, and the second display parameter is normal display; replacing the current avatar material in the avatar frame with the updated avatar material based on the first display parameter and the second display parameter includes:
determining whether the current head portrait material has a head portrait pendant or not;
if the head portrait hanging piece exists, removing the head portrait hanging piece of the current head portrait material, and replacing the current head portrait material with the updated head portrait material;
and if the head portrait pendant does not exist, replacing the current head portrait material with the updated head portrait material.
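The replacement flow of claims 8–9 can be sketched as a small state update: replace the currently shown material with the updated material, first removing any avatar pendant if one exists. The state dictionary, keys, and display-parameter values are illustrative assumptions.

```python
# Hypothetical sketch of claims 8-9: avatar replacement with pendant removal.

def replace_avatar(state, updated, first_param="special"):
    """Replace the current material in state; drop a pendant if one exists."""
    if state.get("pendant") is not None:
        state["pendant"] = None          # pendant removed before replacement
    state["material"] = updated
    state["display"] = first_param       # updated material's display parameter
    return state

state = replace_avatar({"material": "old.png", "pendant": "crown"}, "new.mp4")
```

When no pendant is present, the branch is skipped and the material is replaced directly, matching the two cases in claim 9.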
10. The method for displaying the avatar according to any one of claims 1-6, wherein before dynamically displaying the avatar material in the avatar frame according to the dynamic parameters of the at least two avatar layers, the method further comprises:
receiving an avatar display instruction, wherein the avatar display instruction carries a display scene identifier;
and under the condition that the target display scene indicated by the display scene identification supports special effect display, executing the operation step of dynamically displaying the head portrait materials in the head portrait frame according to the dynamic parameters of the at least two head portrait layers, wherein the special effect display comprises dynamic display and/or out-of-frame display.
11. An avatar display apparatus, comprising:
the first determining module is configured to acquire a head portrait material to be displayed and determine at least two head portrait layers corresponding to the head portrait material;
the second determining module is configured to acquire a current offset parameter of the mobile terminal, and determine dynamic parameters of the at least two head portrait layers according to the current offset parameter, wherein the dynamic parameters of different head portrait layers are different;
and the display module is configured to dynamically display the avatar materials in the avatar frame according to the dynamic parameters of the at least two avatar layers.
12. A computing device, comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions, and the processor is configured to execute the computer-executable instructions to implement the operational steps of the avatar display method of any of claims 1-10.
13. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the operational steps of the avatar display method of any of claims 1-10.
Application CN202211490527.8A — Head portrait display method and device — filed 2022-11-25 (priority date 2022-11-25); status: Pending.

Publication: CN115760628A, published 2023-03-07.

Family ID: 85337978 (CN).


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination