CN115379269A - Live broadcast interaction method of virtual image, computing equipment and storage medium

Info

Publication number
CN115379269A
Authority
CN
China
Prior art keywords
user
characteristic information
target
avatar
feature information
Prior art date
Legal status
Pending
Application number
CN202210988979.2A
Other languages
Chinese (zh)
Inventor
刘博
李琳
郑彬戈
吴耀华
高山
Current Assignee
Migu Cultural Technology Co Ltd
China Mobile Communications Group Co Ltd
Original Assignee
Migu Cultural Technology Co Ltd
China Mobile Communications Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Migu Cultural Technology Co Ltd and China Mobile Communications Group Co Ltd
Priority to CN202210988979.2A
Publication of CN115379269A
Legal status: Pending (current)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866 Management of end-user data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866 Management of end-user data
    • H04N21/25891 Management of end-user data being end-user preferences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Abstract

The invention discloses a live interaction method for an avatar, a computing device and a storage medium. The method comprises: receiving first characteristic information of users, and determining, according to the first characteristic information, target users whose characteristic information covers a preset proportion of the live-broadcast audience; receiving second characteristic information of the target users, and generating avatars according to the second characteristic information; and delivering the avatars. According to the invention, the target users whose characteristic information covers a preset proportion of the live-broadcast audience are determined by screening based on the first characteristic information, and a personalized avatar corresponding to each target user is generated, which avoids the homogenization problem.

Description

Live broadcast interaction method of virtual image, computing equipment and storage medium
Technical Field
Embodiments of the present invention relate to the technical field of live streaming, and in particular to a live interaction method for an avatar, a computing device and a storage medium.
Background
When a host (anchor) goes live, the broadcast can be carried out through an avatar. Typically, the anchor's facial expressions are captured by a collection device such as a camera, a set of facial feature points is extracted, and the facial state of a pre-constructed avatar is driven according to those feature points, so that the live broadcast is presented through the avatar.
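By way of illustration only, the sketch below shows the shape of this capture-and-drive pipeline in Python; `detector.extract_face_landmarks` and `avatar.set_face_state` are hypothetical placeholders for whatever landmark detector and rendering engine a concrete system would use.

```python
# Minimal sketch of the background pipeline, assuming hypothetical
# `detector.extract_face_landmarks` and `avatar.set_face_state` APIs.
import cv2  # OpenCV, used here only for camera capture


def drive_avatar_from_camera(avatar, detector, camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # Extract facial feature points from the captured portrait.
            landmarks = detector.extract_face_landmarks(frame)  # hypothetical
            if landmarks is not None:
                # Drive the pre-constructed avatar's facial state.
                avatar.set_face_state(landmarks)  # hypothetical
    finally:
        cap.release()
```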
However, in existing avatar-based live broadcasting, the avatar characters are severely homogenized and the viewing experience is poor.
Disclosure of Invention
In view of the above, embodiments of the present invention are proposed to provide a live avatar interaction method, a computing device and a storage medium that overcome, or at least partially solve, the above problems.
According to one aspect of the embodiments of the present invention, there is provided a live interaction method for an avatar, executed in a server, the method comprising:
receiving first characteristic information of users, and determining, according to the first characteristic information, a target user whose characteristic information covers a preset proportion of the live-broadcast audience;
receiving second characteristic information of the target user, and generating an avatar according to the second characteristic information;
and delivering the avatar.
According to another aspect of the embodiments of the present invention, there is provided a live interaction method for an avatar, executed at a user terminal, the method comprising:
uploading first characteristic information of a user to a server, so that the server determines, according to the first characteristic information, the target users who meet a preset interaction-participation condition;
if the user is a target user, collecting and uploading second characteristic information of the user in real time;
and receiving and displaying an avatar, wherein the avatar is generated according to the second characteristic information.
According to yet another aspect of the embodiments of the present invention, there is provided a computing device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the operations corresponding to the above avatar live interaction method.
According to a further aspect of the embodiments of the present invention, there is provided a computer storage medium storing at least one executable instruction that causes a processor to perform the operations corresponding to the above avatar live interaction method.
According to the live avatar interaction method, the computing device and the storage medium provided by the embodiments of the present invention, a target user whose characteristic information covers a preset proportion of the live-broadcast audience is determined by screening based on the first characteristic information of users, and a personalized avatar corresponding to the target user is generated, which avoids the homogenization problem.
The foregoing is only an overview of the technical solutions of the embodiments of the present invention. In order that the technical means of the embodiments may be more clearly understood and implemented according to the contents of the description, and in order to make the above and other objects, features and advantages of the embodiments more apparent, specific embodiments of the invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the embodiments of the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
Fig. 1 shows a flow diagram of an avatar live interaction method according to one embodiment of the present invention;
Fig. 2a shows a schematic diagram of the connections between the server, the anchor terminal and the user terminals;
Fig. 2b shows a schematic diagram of a user terminal displaying the interaction entry and the avatars;
Fig. 2c shows a timing diagram between the user terminal and the server when the avatar is generated;
Fig. 2d shows a schematic diagram of the interaction between the user terminal and the server when the avatar is generated;
Fig. 2e shows a timing diagram between the anchor terminal and the server when the anchor controls the avatar;
Fig. 2f shows a schematic diagram of the interaction between the anchor terminal and the server when the anchor controls the avatar;
Fig. 3 shows a flow diagram of an avatar live interaction method according to another embodiment of the present invention;
Fig. 4 shows a flow diagram of an avatar live interaction method according to a further embodiment of the present invention;
Fig. 5 shows a schematic structural diagram of an avatar live interaction system according to an embodiment of the present invention;
Fig. 6 shows a schematic structural diagram of a computing device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 shows a flow chart of an avatar live interaction method according to an embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
step S101, a server receives first characteristic information of a user uploaded by a user side, and determines a target user corresponding to the characteristic information of a live broadcast user covering a preset range according to the first characteristic information.
This embodiment involves three parties: the anchor terminal, i.e., the device on which the anchor broadcasts; the user terminals, i.e., the devices on which users watch the broadcast; and the server, which receives the requests and information uploaded by the anchor terminal and the user terminals to realize synchronous interaction of the avatar with both the anchor and the users. The connections between the three are shown in Fig. 2a.
At the anchor terminal, the anchor can open an interaction entry, for example an outfit-change interaction entry. When the anchor triggers the entry-opening operation, the anchor terminal sends an entry-opening request to the server. On receiving the request, the server pushes an interaction message to every user terminal watching the broadcast, and each user terminal displays the interaction entry. As shown in Fig. 2b, the icon reading "Interact with me" on the right side of the user terminal is such an entry; this is only an example, and the specific display style is set according to the implementation.
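A minimal sketch of this entry-opening flow is given below, assuming an asyncio-style websocket transport; the `ROOMS` bookkeeping and message fields are illustrative assumptions, since the patent does not fix a transport or message format.

```python
# Sketch of the entry-opening flow, under assumed room bookkeeping and
# message schema (the patent leaves transport and format open).
import asyncio
import json

ROOMS = {}  # room_id -> set of viewer websocket connections (assumed)


async def handle_anchor_message(room_id, message):
    if message.get("type") == "open_interaction_entry":
        payload = json.dumps({
            "type": "show_interaction_entry",
            "label": "Interact with me",  # display style is implementation-defined
        })
        viewers = ROOMS.get(room_id, set())
        # Push the interaction message to every user terminal in the room.
        await asyncio.gather(*(ws.send(payload) for ws in viewers),
                             return_exceptions=True)
```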
After the user terminal displays the interaction entry, each user can trigger it as desired to participate in the interaction. When a user triggers the entry, first characteristic information of the user, such as gender, facial features, facial key points, skin color and body type, can be collected through a device such as a camera; attributes such as the user's age, weight and height can further be estimated from the collected information. The first characteristic information is then uploaded to the server.
The server inputs the first characteristic information into a preset classification model, obtains by screening the target characteristic information that covers a preset proportion of the live-broadcast audience, determines the target users matching that target characteristic information, and returns a "screening passed" participation result to the user terminals corresponding to the target users. Any classification model may be used; it can be trained in advance on historical sample data and is not limited here. For example, the model determines the target characteristic information that covers a preset proportion, say 90%, of the users in the live room, and the users whose information matches that target characteristic information are selected as the target users. Further, the server may feed all of the uploaded first characteristic information into the preset classification model; alternatively, it may first divide the first characteristic information into features of different dimensions in combination with user history data, such as viewing records, consumption records, types of live rooms watched and consumption types, and determine a feature weight for each dimension. The first characteristic information is then sorted by feature weight, and a preset number of top-ranked items are taken as the first characteristic information actually used for screening, which prevents unimportant features from distorting the target characteristic information.
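The sketch below illustrates one plausible form of this screening in Python, with a simple coverage heuristic standing in for the preset classification model that the patent leaves open; the feature weights, the top-k cutoff and the 90% coverage threshold are all assumptions.

```python
# Hedged sketch of target-user screening: rank features by weight, keep the
# top-k, then select the feature profiles that together cover a preset share
# (e.g. 90%) of the live audience. A stand-in for the unspecified model.
from collections import Counter


def screen_target_users(users, feature_weights, top_k=3, coverage=0.9):
    """users: [{"id": ..., "features": {"gender": ..., "age": ...}}, ...]"""
    # Keep only the top-k weighted features so unimportant ones do not
    # distort the target characteristic information.
    ranked = sorted(feature_weights, key=feature_weights.get, reverse=True)[:top_k]
    profile = lambda u: tuple(u["features"].get(f) for f in ranked)
    counts = Counter(profile(u) for u in users)

    # Greedily add the most common profiles until the preset share is covered.
    selected, covered = set(), 0
    for prof, count in counts.most_common():
        selected.add(prof)
        covered += count
        if covered >= coverage * len(users):
            break
    # Users matching the covering profiles become the target users.
    return [u for u in users if profile(u) in selected]
```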
Further, for users who fail the screening, the server may likewise deliver a "screening failed" participation result to inform them.
Step S102: the target user terminal receives the screening result returned by the server and collects and uploads second characteristic information of the target user in real time; the server receives the second characteristic information and generates an avatar according to it.
When the target user terminal receives the "screening passed" participation result returned by the server, it collects second characteristic information of the target user in real time, including facial features, facial key-point positions, limb point positions, actions and the like, and uploads it to the server. The server generates an avatar corresponding to the target user by combining the second characteristic information with a preset virtual model, i.e., a pre-built avatar template: for example, the facial features and key-point positions of the template are adjusted accordingly, and the target user's avatar is generated through techniques such as face fusion.
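As a toy illustration of combining the second characteristic information with the preset virtual model, the following sketch blends a user's facial key points into a template's key points; the flat (N, 2) landmark layout and the blend factor are assumptions, and real face fusion is considerably more involved.

```python
# Toy sketch of personalizing a preset virtual model with a target user's
# facial key points; the (N, 2) layout and 0.6 blend factor are assumptions.
import numpy as np


def generate_avatar_landmarks(template_pts: np.ndarray,
                              user_pts: np.ndarray,
                              blend: float = 0.6) -> np.ndarray:
    # Move each template key point part-way toward the user's key point,
    # keeping the template's style while personalizing the face.
    return (1.0 - blend) * template_pts + blend * user_pts
```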
Step S103: the server delivers the avatar, and the user terminals receive and display it.
The server delivers the generated avatars, and the user terminals display the avatars corresponding to the second characteristic information of the target users. The display effect is shown in Fig. 2b, whose lower part contains three avatars corresponding to different target users; this avoids the avatar-homogenization problem. Besides the target users' own terminals, the other user terminals also display the avatars, so that all users see a consistent display.
Optionally, this embodiment may further include the following steps:
and step S104, the target user side collects and uploads the third characteristic information of the user in real time, the server continuously receives the third characteristic information of the target user and continuously maps the third characteristic information to the virtual image, and the user side continuously receives and displays the virtual image mapped by the third characteristic information.
The avatars displayed on the user terminals can interact synchronously according to third characteristic information collected from the target user in real time. The target user terminal collects the third characteristic information in real time and uploads it to the server, and the server maps the continuously received information onto the avatar through techniques such as image-based motion driving, achieving synchronous interaction between the avatar and the target user: if target user A raises an arm, the corresponding avatar A raises its arm in step, which makes participation more engaging. All user terminals continuously receive and display the avatar as mapped by the third characteristic information, so the synchronous interaction can be watched in real time. Further, if the target user terminal cannot collect motion information in real time, voice-command or text-instruction information of the target user can be collected instead, converted into motion information and uploaded to the server, again achieving synchronous interaction between the avatar and the target user.
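Continuing the hypothetical websocket protocol sketched earlier, the loop below forwards the target user's streamed feature messages onto the avatar and fans the updated state out to all viewers; `avatar.apply_motion` and `avatar.state` are assumed engine calls, not an API the patent specifies.

```python
# Sketch of the real-time mapping loop (step S104), reusing the assumed
# ROOMS bookkeeping; avatar.apply_motion/avatar.state are hypothetical.
import asyncio
import json

ROOMS = {}  # room_id -> set of viewer websocket connections (assumed)


async def relay_target_user_stream(room_id, target_ws, avatar):
    async for raw in target_ws:  # third characteristic info, streamed
        msg = json.loads(raw)
        if msg.get("type") != "features":
            continue
        # Map the received face/limb points and actions onto the avatar.
        avatar.apply_motion(msg["features"])  # hypothetical motion driving
        update = json.dumps({"type": "avatar_update", "state": avatar.state()})
        # Fan the updated avatar out so all viewers watch the interaction live.
        await asyncio.gather(*(ws.send(update) for ws in ROOMS.get(room_id, ())),
                             return_exceptions=True)
```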
The target user may also manipulate the avatar to interact with the anchor, for example gifting a virtual gift to the anchor, dancing, and the like. Specifically, interaction instructions, such as preset action instructions and preset voice instructions, can be configured in advance; when the target user performs a preset action, the corresponding instruction is triggered and uploaded to the server, and the server controls the avatar to complete the interaction operation bound to that preset instruction, so as to interact with the anchor.
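A preset-instruction table might look like the following sketch; the gesture labels and the canned avatar routines are purely illustrative assumptions.

```python
# Illustrative preset interaction instructions: a recognized gesture or voice
# command maps to a canned avatar routine. Labels and routines are assumptions.
PRESET_INSTRUCTIONS = {
    "raise_both_hands": "send_virtual_gift",
    "spin": "dance",
}


def handle_preset_instruction(avatar, trigger_label):
    routine = PRESET_INSTRUCTIONS.get(trigger_label)
    if routine is not None:
        avatar.play_routine(routine)  # hypothetical canned animation call
```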
All of the above characteristic information is collected only with the user's permission.
The timing of the above steps is shown in Fig. 2c (illustrated with a virtual human figure as the avatar): the user terminal triggers the interaction entry and sends a participation request to the server along with the collected first characteristic information of the user; the server screens the users and returns the screening results; the target user terminal then uploads second characteristic information, including facial points (i.e., facial key points), limbs (limb actions and limb point positions) and the like; the server generates the corresponding avatar for the target user, and the user terminals display it. The above is an example and not a limitation. Fig. 2d shows the interaction process between the user terminal and the server; for details, refer to the description of each step, which is not repeated here.
Step S105: the server receives an avatar control request triggered by the anchor terminal, continuously acquires fourth characteristic information of the anchor, and continuously maps the fourth characteristic information onto the avatar.
In this embodiment, besides interacting synchronously according to the target user's third characteristic information, such as actions and expressions, the avatar can also be controlled by the anchor through an avatar control instruction triggered at the anchor terminal, so as to present synchronous interaction between the avatar and the anchor.
Specifically, if the anchor is running an outfit-change show and presenting various garments, the anchor can control the avatars so that they wear the corresponding garments, making it convenient for users to see how the garments look on the avatars and presenting synchronous interaction between the avatars and the anchor. The avatar control instruction can be preset, e.g., a specific action or voice command. When the anchor's action or voice command triggers the instruction, the server receives an avatar control request from the anchor terminal; the anchor terminal collects fourth characteristic information of the anchor in real time, including one or more of clothing, facial point positions, limb point positions and actions, and the server continuously acquires this information and maps it onto the avatar, presenting the effect of the anchor controlling the avatar synchronously. Here, the server may first close the synchronous interaction between the target user and the avatar according to the anchor-triggered control request and then continuously map the anchor's fourth characteristic information onto the avatar, thereby letting the anchor control the avatar's synchronous interaction.
Further, when avatars of multiple target users exist, the anchor can control all of them synchronously, e.g., changing their outfits at the same time; alternatively, one or more avatars can be selected at random for synchronous interaction, e.g., selecting one avatar at a time so that different avatars display different garments. If the anchor has an avatar of their own, the anchor can control that avatar, the target users' avatars, or both; this can be set according to the implementation and is not limited here.
Step S106: the server receives an avatar stop-control request triggered by the anchor terminal, stops mapping the fourth characteristic information, continuously receives fifth characteristic information of the target user, and continuously maps the fifth characteristic information onto the avatar.
When the anchor triggers the avatar stop-control instruction, e.g., through an action or voice command after finishing the outfit-change display, the server receives the stop-control request, stops mapping the fourth characteristic information, and closes the anchor's synchronous control of the avatar. It then re-establishes synchronous interaction between the avatar and the target user: fifth characteristic information of the target user, comprising one or more of clothing, facial point positions, limb point positions and actions, is continuously uploaded by the user terminal, and the server continuously maps it onto the avatar.
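The handoff in steps S105-S106 amounts to switching the avatar's motion source between the target user and the anchor, as in the sketch below; the enum values and message names are illustrative assumptions.

```python
# Sketch of the S105-S106 control handoff: the avatar's motion source flips
# to the anchor on a control request and back to the target user on a stop
# request. Enum values and message names are assumptions.
from enum import Enum


class MotionSource(Enum):
    TARGET_USER = "target_user"
    ANCHOR = "anchor"


class AvatarSession:
    def __init__(self, avatar):
        self.avatar = avatar
        self.source = MotionSource.TARGET_USER

    def on_anchor_request(self, msg_type):
        if msg_type == "avatar_control":        # e.g. an outfit-change command
            self.source = MotionSource.ANCHOR   # cut off the target user
        elif msg_type == "stop_control":
            self.source = MotionSource.TARGET_USER  # re-establish user sync

    def on_features(self, sender, features):
        # Only the current motion source may drive the avatar.
        if sender == self.source:
            self.avatar.apply_motion(features)  # hypothetical driving call
```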
The timing of steps S105-S106 is shown in Fig. 2e, which illustrates the outfit-change case: the anchor terminal triggers the avatar control instruction as an outfit-change instruction, sends the control request to the server, and uploads the anchor's fourth characteristic information such as facial points (i.e., facial key points) and limbs (limb actions and limb point positions); the server cuts off the target user's interaction with the avatar and maps the anchor's fourth characteristic information onto the avatar, realizing anchor-controlled avatar interaction. When the anchor sends the stop instruction, the server cuts off the anchor's control, continuously collects fifth characteristic information of the target user, and maps it onto the avatar, restoring synchronous interaction between the target user and the avatar. The above is an example, set according to the implementation and not limited here. Fig. 2f shows the interaction process between the anchor terminal and the server; for details, refer to the description of each step, which is not repeated here.
Further, non-target users, i.e., users who failed the screening, can also operate the avatars on their terminals, e.g., zooming and rotating an avatar to inspect the outfit-change effect. Alternatively, the live room may provide an entry for generating an individual avatar for a non-target user: the non-target user clicks the avatar-generation operation, relevant characteristic information is collected, e.g., sixth characteristic information including facial features, facial key points, limb key points and actions, and a corresponding avatar is generated and displayed on that user's terminal according to the sixth characteristic information; this avatar can likewise present the outfit-change effect according to the anchor's avatar control instructions. After viewing the effect presented by the avatars, both target users and non-target users can make quick decisions, such as adding items to a shopping cart.
The live broadcast in this embodiment may be of any type, such as outfit changes, cosmetics and the like, so that users can experience the presentation effect intuitively and decide quickly. Since the avatar driven by the target user's third characteristic information can also be controlled by the anchor, the interactive experience in the live broadcast is enhanced, and rich interaction modes and multi-dimensional display modes are provided, strengthening the live broadcast effect and helping users decide quickly.
According to the avatar live interaction method provided by this embodiment of the present invention, target users whose characteristic information covers a preset proportion of the live-broadcast audience are determined by screening based on the first characteristic information of users, and a personalized avatar corresponding to each target user is generated, which avoids the homogenization problem. Furthermore, the avatar interacts synchronously with the user and can be controlled by the anchor, providing diverse interaction during the live broadcast.
Fig. 3 shows an avatar live interaction method provided by an embodiment of the present invention, executed in a server. As shown in Fig. 3, the method comprises:
Step S301: receiving first characteristic information of users, and determining, according to the first characteristic information, target users whose characteristic information covers a preset proportion of the live-broadcast audience.
Optionally, a feature weight corresponding to the first characteristic information is determined according to the first characteristic information in combination with user history data;
the first characteristic information is sorted by feature weight, and a preset number of top-ranked items are taken as the first characteristic information used for screening target users and input into the preset classification model;
the first characteristic information is input into the preset classification model, the target characteristic information covering a preset proportion of the live-broadcast audience is obtained by screening, the target users matching the target characteristic information are determined, and a "screening passed" participation result is returned to the user terminals corresponding to the target users.
Further, the first characteristic information is input into the preset classification model, which analyzes and determines the target characteristic information covering a preset proportion of the live-broadcast audience;
and the users whose user information matches the target characteristic information are determined by screening to be the target users.
Step S302: receiving second characteristic information of the target user, and generating an avatar according to the second characteristic information.
Step S303: delivering the avatar.
Optionally, this embodiment may further include the following steps:
and step S304, continuously receiving the third characteristic information of the target user, and continuously mapping the third characteristic information to the virtual image.
Step S305, receiving an avatar control request triggered by the anchor terminal, continuously obtaining fourth feature information of the anchor, and continuously mapping the fourth feature information to the avatar.
Step S306, receiving the virtual image stop control request triggered by the anchor terminal, stopping the mapping of the fourth characteristic information, continuously receiving the fifth characteristic information of the target user, and continuously mapping the fifth characteristic information to the virtual image.
Optionally, the fourth feature information includes one or more of: clothing, facial point locations, limb point locations, actions; the fifth characteristic information includes one or more of: clothing, facial point locations, limb point locations, actions.
For the above steps, refer to the descriptions of the corresponding steps in Fig. 1; they are not repeated here.
Fig. 4 shows an avatar live interaction method provided by an embodiment of the present invention, executed at a user terminal. As shown in Fig. 4, the method comprises:
Step S401: uploading first characteristic information of a user to a server, so that the server determines, according to the first characteristic information, the target users who meet a preset interaction-participation condition.
Step S402: if the user is a target user, collecting and uploading second characteristic information of the user in real time.
Optionally, after receiving and displaying the avatar, third characteristic information of the user is collected and uploaded in real time, and the avatar as mapped by the third characteristic information is continuously received and displayed.
Step S403: receiving and displaying the avatar, wherein the avatar is generated according to the second characteristic information.
For the above steps, refer to the descriptions of the corresponding steps in Fig. 1; they are not repeated here.
Fig. 5 shows a schematic structural diagram of an avatar live interaction system according to an embodiment of the present invention. As shown in Fig. 5, the system 500 comprises: an anchor terminal 510, a server 520 and user terminals 530.
The server 520 includes the following modules:
the first receiving module 521 is adapted to receive first feature information of a user, and determine, according to the first feature information, a target user corresponding to the feature information of a live broadcast user covering a preset range;
a second receiving module 522, adapted to receive second feature information of the target user, and generate an avatar according to the second feature information;
the issuing module 523 is adapted to issue the avatar.
Optionally, the first receiving module 521 is further adapted to: and inputting the first characteristic information into a preset classification model, screening to obtain target characteristic information of live broadcast users covering a preset range, determining target users matched with the target characteristic information, and returning participation interaction results which are screened by target user sides corresponding to the target users.
Optionally, the server 520 further comprises:
the feature screening module 524 is adapted to determine a feature weight corresponding to the first feature information according to the first feature information and by combining with the user historical data; sorting the first feature information according to the feature weights, and acquiring the first feature information with the prior sorting in a preset quantity as the first feature information for screening the target users for inputting to a preset classification model;
the first receiving module 521 is further adapted to: inputting the first characteristic information into a preset classification model, and analyzing and determining target characteristic information of live broadcast users covering a preset range based on the preset classification model; and screening the target characteristic information to determine that the user corresponding to the user information with the same target characteristic information is the target user.
Optionally, the server 520 further comprises:
and the third receiving module 525 is adapted to continuously receive the third feature information of the target user and continuously map the third feature information to the avatar.
Optionally, the server 520 further comprises:
a fourth receiving module 526, adapted to receive an avatar control request triggered by the anchor terminal, continuously obtain fourth feature information of the anchor, and continuously map the fourth feature information to the avatar; and receiving an avatar stop control request triggered by the anchor terminal, stopping mapping of the fourth characteristic information, continuously receiving fifth characteristic information of the target user, and continuously mapping the fifth characteristic information to the avatar.
Optionally, the fourth feature information includes one or more of: clothing, facial point locations, limb point locations, actions; the fifth characteristic information includes one or more of: clothing, facial point locations, limb point locations, actions.
The user terminal 530 comprises the following modules:
a first uploading module 531, adapted to upload first characteristic information of a user to the server, so that the server determines, according to the first characteristic information, the target users who meet a preset interaction-participation condition;
a second uploading module 532, adapted to collect and upload second characteristic information of the user in real time if the user is a target user;
an avatar receiving module 533, adapted to receive and display the avatar, wherein the avatar is generated according to the second characteristic information.
Optionally, the user terminal 530 further comprises: a third uploading module 534, adapted to collect and upload third characteristic information of the user in real time, and to continuously receive and display the avatar as mapped by the third characteristic information.
The descriptions of the modules refer to the corresponding descriptions in the method embodiments, and are not repeated herein.
An embodiment of the present invention further provides a non-volatile computer storage medium storing at least one executable instruction that can execute the avatar live interaction method in any of the above method embodiments.
Fig. 6 shows a schematic structural diagram of a computing device according to an embodiment of the present invention; the specific embodiments of the present invention do not limit the specific implementation of the computing device.
As shown in Fig. 6, the computing device may include: a processor 602, a communication interface 604, a memory 606 and a communication bus 608, wherein:
the processor 602, the communication interface 604 and the memory 606 communicate with one another via the communication bus 608.
The communication interface 604 is used for communicating with network elements of other devices, such as clients or other servers.
The processor 602 is configured to execute the program 610, and may specifically execute the relevant steps in the above embodiment of the avatar live broadcast interaction method.
In particular, program 610 may include program code comprising computer operating instructions.
The processor 602 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The computing device includes one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs together with one or more ASICs.
The memory 606 stores the program 610. The memory 606 may comprise high-speed RAM and may also include non-volatile memory, such as at least one disk memory.
The program 610 may specifically be configured to cause the processor 602 to execute the avatar live interaction method in any of the above method embodiments. For the specific implementation of each step in the program 610, refer to the corresponding steps and units in the above avatar live interaction embodiments, which are not repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the devices and modules described above may refer to the corresponding process descriptions in the foregoing method embodiments and are likewise not repeated here.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of embodiments of the present invention as described herein, and any descriptions of specific languages are provided above to disclose preferred embodiments of the invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components in the embodiments may be combined into one module or unit or component, and furthermore, may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components according to embodiments of the present invention. Embodiments of the invention may also be implemented as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing embodiments of the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. Embodiments of the invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specified otherwise.

Claims (10)

1. A live interaction method for an avatar, wherein the method is executed in a server and comprises:
receiving first characteristic information of users, and determining, according to the first characteristic information, a target user whose characteristic information covers a preset proportion of the live-broadcast audience;
receiving second characteristic information of the target user, and generating an avatar according to the second characteristic information;
and delivering the avatar.
2. The method according to claim 1, wherein the receiving of first characteristic information of users and the determining, according to the first characteristic information, of a target user whose characteristic information covers a preset proportion of the live-broadcast audience further comprises:
inputting the first characteristic information into a preset classification model, obtaining by screening the target characteristic information covering a preset proportion of the live-broadcast audience, determining the target user matching the target characteristic information, and returning a "screening passed" participation result to the user terminal corresponding to the target user.
3. The method according to claim 2, wherein, before the inputting of the first characteristic information into the preset classification model, the obtaining by screening of the target characteristic information covering a preset proportion of the live-broadcast audience, the determining of the target user matching the target characteristic information, and the returning of the "screening passed" participation result to the user terminal corresponding to the target user, the method further comprises:
determining a feature weight corresponding to the first characteristic information according to the first characteristic information in combination with user history data;
sorting the first characteristic information by feature weight, and taking a preset number of top-ranked items as the first characteristic information used for screening target users and input into the preset classification model;
and wherein the inputting of the first characteristic information into the preset classification model, the obtaining by screening of the target characteristic information covering a preset proportion of the live-broadcast audience, and the determining of the target user matching the target characteristic information further comprise:
inputting the first characteristic information into the preset classification model, which analyzes and determines the target characteristic information covering a preset proportion of the live-broadcast audience;
and determining by screening that the user whose user information matches the target characteristic information is the target user.
4. The method according to any one of claims 1-3, further comprising, after the delivering of the avatar:
continuously receiving third characteristic information of the target user, and continuously mapping the third characteristic information onto the avatar.
5. The method according to claim 4, further comprising, after the delivering of the avatar:
receiving an avatar control request triggered by an anchor terminal, continuously acquiring fourth characteristic information of the anchor, and continuously mapping the fourth characteristic information onto the avatar;
and receiving an avatar stop-control request triggered by the anchor terminal, stopping the mapping of the fourth characteristic information, continuously receiving fifth characteristic information of the target user, and continuously mapping the fifth characteristic information onto the avatar.
6. The method according to claim 5, wherein the fourth characteristic information comprises one or more of: clothing, facial point positions, limb point positions and actions;
and the fifth characteristic information comprises one or more of: clothing, facial point positions, limb point positions and actions.
7. A live interaction method for an avatar, wherein the method is executed at a user terminal and comprises:
uploading first characteristic information of a user to a server, so that the server determines, according to the first characteristic information, a target user meeting a preset interaction-participation condition;
if the user is the target user, collecting and uploading second characteristic information of the user in real time;
and receiving and displaying an avatar, wherein the avatar is generated according to the second characteristic information.
8. The method according to claim 7, further comprising, if the user is the target user, after the receiving and displaying of the avatar:
collecting and uploading third characteristic information of the user in real time;
and continuously receiving and displaying the avatar as mapped by the third characteristic information.
9. A computing device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the operations corresponding to the avatar live interaction method according to any one of claims 1-6;
or, the executable instruction causes the processor to perform the operations corresponding to the avatar live interaction method according to claim 7 or 8.
10. A computer storage medium having stored therein at least one executable instruction that causes a processor to perform the operations corresponding to the avatar live interaction method according to any one of claims 1-6;
or, the executable instruction causes the processor to perform the operations corresponding to the avatar live interaction method according to claim 7 or 8.
CN202210988979.2A 2022-08-17 2022-08-17 Live broadcast interaction method of virtual image, computing equipment and storage medium Pending CN115379269A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210988979.2A CN115379269A (en) 2022-08-17 2022-08-17 Live broadcast interaction method of virtual image, computing equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115379269A 2022-11-22

Family

ID=84066679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210988979.2A Pending CN115379269A Live broadcast interaction method of virtual image, computing equipment and storage medium

Country Status (1)

Country Link
CN CN115379269A

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106355449A (en) * 2016-08-31 2017-01-25 腾讯科技(深圳)有限公司 User selecting method and device
CN107730311A (en) * 2017-09-29 2018-02-23 北京小度信息科技有限公司 A kind of method for pushing of recommendation information, device and server
CN108182600A (en) * 2017-12-27 2018-06-19 北京奇虎科技有限公司 A kind of method and system that extending user is determined according to weighted calculation
CN109874021A (en) * 2017-12-04 2019-06-11 腾讯科技(深圳)有限公司 Living broadcast interactive method, apparatus and system
KR20210060196A (en) * 2019-11-18 2021-05-26 주식회사 케이티 Server, method and user device for providing avatar message service

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116704097A (en) * 2023-06-07 2023-09-05 好易购家庭购物有限公司 Digitized human figure design method based on human body posture consistency and texture mapping
CN116704097B (en) * 2023-06-07 2024-03-26 好易购家庭购物有限公司 Digitized human figure design method based on human body posture consistency and texture mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination