CN112637622A - Live broadcasting singing method, device, equipment and medium - Google Patents

Live broadcasting singing method, device, equipment and medium

Info

Publication number
CN112637622A
CN112637622A (application CN202011460147.0A / CN202011460147A)
Authority
CN
China
Prior art keywords
singing
song
virtual object
action
video content
Prior art date
Legal status (assumption, not a legal conclusion)
Pending
Application number
CN202011460147.0A
Other languages
Chinese (zh)
Inventor
杨沐
王骁玮
Current Assignee (listed assignees may be inaccurate)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202011460147.0A priority Critical patent/CN112637622A/en
Publication of CN112637622A publication Critical patent/CN112637622A/en
Priority to PCT/CN2021/128073 priority patent/WO2022121558A1/en
Pending legal-status Critical Current

Classifications

    All under HELECTRICITY > H04 Electric communication technique > H04N Pictorial communication, e.g. television > H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]:
    • H04N 21/2187 Live feed (under 21/20 Servers for content distribution > 21/218 Source of audio or video content, e.g. local disk arrays)
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations (under 21/40 Client devices > 21/43 Processing of content or additional data)
    • H04N 21/472 End-user interface for requesting content, additional data or services; end-user interface for interacting with content, e.g. content reservation, reminders, event notification, manipulating displayed content (under 21/40 Client devices > 21/47 End-user applications)
    • H04N 21/4758 End-user interface for inputting end-user data for providing answers, e.g. voting (under 21/47 End-user applications > 21/475 End-user interface for inputting end-user data, e.g. PIN, preference data)
    • H04N 21/4788 Supplemental services communicating with other users, e.g. chatting (under 21/47 End-user applications > 21/478 Supplemental services)
    • H04N 21/6587 Control parameters, e.g. trick play commands, viewpoint selection (under 21/60 Network structure or processes for video distribution > 21/65 Transmission of management data > 21/658 Transmission by the client directed to the server)
    • H04N 21/816 Monomedia components involving special video data, e.g. 3D video (under 21/80 Generation or processing of content by content creator > 21/81 Monomedia components thereof)

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the disclosure relates to a live broadcasting singing method, apparatus, device, and medium, wherein the method includes: displaying a live broadcast room page of a virtual object, and playing singing video content of the virtual object on the live broadcast room page; and, in the process of playing the singing video content, switching the picture view angle of the singing video content and/or the action of the virtual object as the attribute features of the sung song change. With this technical solution, while the virtual object sings a song in a live broadcast, the action of the virtual object and/or the picture view angle of the singing video content can change automatically based on the song, so the singing video content of the virtual anchor matches the sung song and their relevance is higher. The effect of the virtual object singing live is better, the diversity and interest of the virtual object's display are improved, and the user's experience while the virtual object sings live is further improved.

Description

Live broadcasting singing method, device, equipment and medium
Technical Field
The present disclosure relates to the field of live broadcasting technologies, and in particular, to a live broadcasting singing method, apparatus, device, and medium.
Background
With the continuous development of live broadcast technology, watching live broadcasts has become an important entertainment activity in people's lives.
Currently, during a live broadcast, an anchor can sing songs selected by users. In general, however, the anchor's singing picture does not match the song, and the relevance between them is low.
Disclosure of Invention
To solve the above technical problems or at least partially solve the above technical problems, the present disclosure provides a live broadcasting singing method, apparatus, device, and medium.
The embodiment of the disclosure provides a live broadcasting singing method, which comprises the following steps:
displaying a live broadcast room page of a virtual object, and playing singing video content of the virtual object on the live broadcast room page;
and, in the process of playing the singing video content, switching the picture view angle of the singing video content and/or the action of the virtual object as the attribute features of the sung song change.
The embodiment of the present disclosure further provides a live broadcasting singing method, where the method includes:
determining a singing song of the virtual object;
determining audio data of the singing song, and action image data and visual angle image data corresponding to the singing song to obtain singing video data;
and sending the singing video data to a terminal, so that the terminal, in the process of playing the singing video content of the virtual object based on the singing video data, switches the picture view angle of the singing video content and/or the action of the virtual object as the attribute features of the sung song change.
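The three server-side steps above (determine the song, gather its audio, action-image and view-angle image data, and send the bundle to the terminal) can be sketched in Python. This is a minimal illustration, not the patent's implementation; all names and data stores are hypothetical.

```python
# Hedged sketch of the server-side flow: look up the sung song's audio,
# action-image and view-angle image data, and package them as the singing
# video data that would be sent to the terminal.

def assemble_singing_video_data(song_id, audio_store, action_store, view_store):
    """Return the singing video data bundle for one sung song."""
    return {
        "song_id": song_id,
        "audio": audio_store[song_id],           # pre-recorded song audio
        "action_images": action_store[song_id],  # motions matched to the song
        "view_images": view_store[song_id],      # camera view-angle image data
    }

# Hypothetical stores for a single song.
audio = {"song-1": b"\x00\x01"}
actions = {"song-1": ["arms_open", "arms_down"]}
views = {"song-1": ["front", "left", "rear"]}

data = assemble_singing_video_data("song-1", audio, actions, views)
```

In a real system the bundle would be encoded and streamed; here it is a plain dictionary so the structure of the three data kinds is visible.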
The embodiment of the present disclosure further provides a live broadcasting singing device, where the device includes:
the live broadcasting singing module is used for displaying a live broadcasting room page of a virtual object and playing the singing video content of the virtual object on the live broadcasting room page;
and the switching module is used for switching the picture view angle of the singing video content and/or the action of the virtual object as the attribute features of the sung song change in the process of playing the singing video content.
The embodiment of the present disclosure further provides a live broadcasting singing device, where the device includes:
the song determining module is used for determining the singing song of the virtual object;
the singing video data module is used for determining the audio data of the singing song, and the action image data and the visual angle image data corresponding to the singing song to obtain singing video data;
and the data sending module is used for sending the singing video data to a terminal, so that the terminal, in the process of playing the singing video content of the virtual object based on the singing video data, switches the picture view angle of the singing video content and/or the action of the virtual object as the attribute features of the sung song change.
An embodiment of the present disclosure further provides an electronic device, which includes: a processor; a memory for storing the processor-executable instructions; the processor is used for reading the executable instructions from the memory and executing the instructions to realize the live broadcasting singing method provided by the embodiment of the disclosure.
The embodiment of the present disclosure also provides a computer-readable storage medium, where the storage medium stores a computer program, and the computer program is used to execute the live singing method provided by the embodiment of the present disclosure.
Compared with the prior art, the technical solution provided by the embodiments of the present disclosure has the following advantages: the live broadcasting singing scheme displays a live broadcast room page of a virtual object and plays singing video content of the virtual object on that page; in the process of playing the singing video content, the picture view angle of the singing video content and/or the action of the virtual object switch as the attribute features of the sung song change. With this technical solution, while the virtual object sings a song in a live broadcast, the action of the virtual object and/or the picture view angle of the singing video content can change automatically based on the song, so the singing video content of the virtual anchor matches the sung song and their relevance is higher. The effect of the virtual object singing live is better, the diversity and interest of the virtual object's display are improved, and the user's experience while the virtual object sings live is further improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a live broadcasting singing method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a live singing provided in an embodiment of the present disclosure;
fig. 3 is a schematic diagram of another live performance provided by the embodiment of the present disclosure;
fig. 4 is a schematic diagram of another live singing provided by the embodiment of the present disclosure;
fig. 5 is a schematic diagram of a song-selecting panel according to an embodiment of the disclosure;
fig. 6 is a schematic flow chart of another live singing method provided in the embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a live broadcasting singing apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of another live broadcasting singing apparatus provided in the embodiment of the present disclosure;
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Fig. 1 is a schematic flow chart of a live broadcasting singing method provided in an embodiment of the present disclosure. The method may be executed by a live broadcasting singing apparatus, which may be implemented in software and/or hardware and may generally be integrated in an electronic device. As shown in fig. 1, the method is applied to viewer terminals that enter the live room of a virtual object, and includes:
step 101, displaying a live broadcast room page of the virtual object, and playing the singing video content of the virtual object on the live broadcast room page.
Step 102, in the process of playing the singing video content, switching the picture view angle of the singing video content and/or the action of the virtual object as the attribute features of the sung song change.
The virtual object may be a three-dimensional model created in advance based on Artificial Intelligence (AI) technology, i.e. a controllable digital object set up on a computer; the limb motions and facial information of a real person may be captured by motion-capture and face-capture devices to drive the virtual object. Virtual objects can be of many types and can have different appearances, such as virtual animals or virtual persons of different styles. In the embodiments of the disclosure, the combination of AI technology with live video technology allows a virtual object to replace a real person in a live video broadcast.
The live broadcast room page is a page displaying a live broadcast room; it may be a web page or a page in an application client. The singing video content is video content generated from the singing video data for playback. The picture view angle represents the view angle at which different lenses shoot the picture of the singing video content. Lenses may include static lenses and dynamic lenses: a static lens has a fixed position and includes at least one of a far lens, a near lens, a panoramic lens, a close-up lens, and the like; a dynamic lens is movable, and a dynamic picture can be obtained by moving the lens during shooting. For example, dynamic lenses may include a surround lens, a track lens, and the like.
In the embodiment of the present disclosure, playing the singing video content of the virtual object on the live broadcast room page may include: receiving singing video data of the virtual object, where the singing video data includes audio data of the sung song and action image data and/or view-angle image data corresponding to the sung song, and the action of the virtual object corresponding to the action image data and the picture view angle corresponding to the view-angle image data match the attribute features of the audio data; and generating singing video content of the virtual object based on the singing video data and playing it.
The singing video data can be understood as the data used to realize the virtual object's live broadcast; specifically, it is configured in advance on the server for each sung song. The singing video data may include a series of data corresponding to the sung song, specifically its audio data and the action image data and/or view-angle image data corresponding to it. The action of the virtual object corresponding to the action image data and the picture view angle corresponding to the view-angle image data both match the attribute features of the audio data; the attribute features matched by the action and those matched by the picture view angle may be the same or different, set according to the actual situation. The audio data of the sung song is pre-recorded song audio corresponding to the song; it may be recorded by a real person or synthesized based on the timbre of the virtual object.
The motion image data may comprise picture data of the virtual object performing a plurality of consecutive motions, i.e. a plurality of motion pictures describing one or more limb motions and/or expression motions of the virtual object, constituting a set of motion images. In the embodiment of the disclosure, multiple types of motion image data can be preset for a virtual object, and each song can be given corresponding motion image data according to its song type. For example, a song of the ancient-style type may correspond to softer motion image data, while a rock song with a heavier tempo may correspond to more rock-style motion image data.
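The song-type-to-motion mapping described above can be sketched as a simple lookup. The type names and motion labels below are illustrative assumptions, not values from the patent.

```python
# Hypothetical preset mapping from song type to a motion-image set:
# softer motions for ancient-style songs, heavier motions for rock.
MOTION_SETS = {
    "ancient": ["slow_wave", "gentle_turn"],
    "rock": ["head_bang", "jump", "fist_pump"],
}

def motion_images_for(song_type, default=("idle",)):
    """Pick the motion-image set configured for a song type,
    falling back to a neutral set for unknown types."""
    return MOTION_SETS.get(song_type, list(default))
```

A server configuring singing video data for a song would call `motion_images_for(song.type)` when assembling the bundle.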
The view-angle image data may include motion images at different picture view angles, i.e. the view angles at which different lenses shoot the virtual object. Different view angles correspond to different display information, which may include the display size and/or display direction of the motion image. For example, when the picture view angle switches from a far lens to a near lens, the display size of the motion image is enlarged; when it switches from a left lens to a right lens, the display direction of the motion image switches from the left side to the right side.
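The lens-dependent display information above can be sketched as a table of per-lens display parameters. The scale factors and direction labels are illustrative assumptions.

```python
# Hypothetical display information per lens: a far lens shrinks the
# motion image, a near lens enlarges it, and left/right lenses change
# the display direction, as described in the text.
LENS_DISPLAY = {
    "far":   {"scale": 0.5, "direction": "front"},
    "near":  {"scale": 1.5, "direction": "front"},
    "left":  {"scale": 1.0, "direction": "left"},
    "right": {"scale": 1.0, "direction": "right"},
}

def adjust_motion_image(image, lens):
    """Attach the display size and direction for the given lens."""
    info = LENS_DISPLAY[lens]
    return {"image": image, "scale": info["scale"], "direction": info["direction"]}
```

Switching from "far" to "near" then simply means re-rendering the same motion image with the new scale and direction.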
Specifically, the terminal can display the live broadcast room page of the virtual object based on a trigger operation of the viewer in the live broadcast application, receive the singing video data sent by the server, generate the singing video content of the virtual object by decoding the singing video data, and play it on the live broadcast room page. During playback, the picture view angle of the singing video content can switch as the attribute features of the sung song change, and/or the action of the virtual object can switch as those features change. The sung song may be a preset song or a song selected by a user at an earlier time; this is not specifically limited.
In the embodiment of the present disclosure, the live singing method may further include: in response to the singing video content switching from a first picture view angle to a second picture view angle, adjusting the motion image of the virtual object based on the second picture view angle. The sung song is associated with at least one view-angle identifier and at least one action identifier; specifically, the identifiers are associated with the song through its timestamps, and the timestamp associated with an action identifier may be the same as or different from the timestamp associated with a view-angle identifier. Each view-angle identifier corresponds to at least one picture view angle, each action identifier corresponds to at least one group of actions, and both kinds of identifier are generated based on the attribute features of the sung song. The first and second picture view angles correspond to two different view-angle identifiers.
During playback of the singing video content, when the sung song plays from the timestamp corresponding to the first view-angle identifier to the timestamp corresponding to the second view-angle identifier, the singing video content switches from the first picture view angle to the second; based on the display information corresponding to the second picture view angle, the motion image corresponding to that timestamp can be adjusted and the adjusted motion image displayed.
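The timestamp-driven identifier lookup described above can be sketched as follows: given a schedule of (timestamp, identifier) pairs associated with the song, playback selects whichever identifier's timestamp was most recently passed. The schedule values are a hypothetical example, not data from the patent.

```python
import bisect

def active_identifier(schedule, position):
    """Return the identifier active at a playback position (seconds).

    schedule: list of (timestamp_seconds, identifier), sorted by timestamp.
    Returns None before the first timestamp.
    """
    times = [t for t, _ in schedule]
    i = bisect.bisect_right(times, position) - 1  # last timestamp <= position
    return schedule[i][1] if i >= 0 else None

# Hypothetical view-angle schedule for one sung song.
view_schedule = [(0.0, "front"), (12.5, "left"), (30.0, "surround")]
```

An action schedule works identically with action identifiers; since the two schedules are independent lists, their timestamps may coincide or differ, as the text notes.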
In the embodiment of the present disclosure, the attribute features may include at least one of rhythm, melody, duration, and the like. Switching the picture view angle of the singing video content and/or the action of the virtual object as the attribute features of the sung song change includes: in response to a change in the rhythm, melody and/or duration of the sung song, switching the singing video content from a third picture view angle to a fourth picture view angle, and/or switching the virtual object from a first action to a second action, where the actions of the virtual object include expression actions and limb actions.
The third and fourth picture view angles generically denote different picture view angles corresponding to the view-angle identifiers associated with the sung song; that is, the singing video content can switch among different picture view angles as the song's attribute features change. For example, when the sung song is associated with a view-angle identifier corresponding to the picture view angle of a surround lens, and playback reaches the time point of that identifier, the picture view angle can switch to the surround-lens view angle, i.e. the virtual camera position performs a surround movement, achieving a surround display of the virtual object. Likewise, the first and second actions generically denote different actions of the virtual object corresponding to the action identifiers associated with the sung song, and the virtual object can switch among different actions as the song's attribute features change.
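The switching rule above (an attribute-feature change triggers a view-angle and/or action switch) can be sketched as a small event handler. The feature values and the switch targets are illustrative assumptions only.

```python
# Hedged sketch: when a song attribute feature (e.g. rhythm) changes,
# switch the picture view angle and the virtual object's action.
def on_attribute_change(state, feature, new_value):
    """Update playback state when a song attribute feature changes."""
    if state["features"].get(feature) == new_value:
        return state  # no change: keep current view angle and action
    state["features"][feature] = new_value
    if feature == "rhythm":
        # Hypothetical policy: fast rhythm gets a surround lens and an
        # energetic action; otherwise a front lens and a gentle action.
        state["view"] = "surround" if new_value == "fast" else "front"
        state["action"] = "jump" if new_value == "fast" else "sway"
    return state

state = {"features": {"rhythm": "slow"}, "view": "front", "action": "sway"}
state = on_attribute_change(state, "rhythm", "fast")
```

The same handler shape extends to melody and duration features by adding branches; the patent leaves the concrete mapping to configuration.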
Exemplarily, fig. 2 is a schematic diagram of live singing provided by an embodiment of the present disclosure. As shown in fig. 2, the live broadcast room page of the virtual object 11 shows the live picture during the virtual object's live singing: the virtual object 11 has both arms spread, the picture view angle is that of a front lens, and a microphone in the scene is shown in front of the virtual object 11. The upper left corner of the live room page in fig. 2 also shows the avatar and name of the virtual object 11, named "small A", and a follow button 12.
Fig. 3 is a schematic diagram of another live singing provided by an embodiment of the disclosure. Compared with fig. 2, in fig. 3 the action of the virtual object 11 is unchanged (both arms still spread) while the picture view angle is that of a left lens: the display size of the virtual object 11 under this lens is smaller than in fig. 2, its display direction has changed, and the display direction and display size of the microphone have changed as well. Fig. 4 is a schematic diagram of another live singing provided by an embodiment of the disclosure. Compared with fig. 2, in fig. 4 the action of the virtual object 11 has changed to both arms down, the picture view angle is that of a rear lens, the display size of the virtual object 11 under this lens is larger than in fig. 2, its display direction has changed, and the display direction and display size of the microphone have changed correspondingly.
The live singing diagrams of figs. 2, 3 and 4 show the changes of action and picture view angle of the same virtual object 11 while it sings the same song live. The above is only an example; in actual live singing, the virtual object's action changes and picture view-angle switches may take many forms, and the details are not limited.
The live broadcasting singing scheme provided by the embodiment of the disclosure displays a live broadcast room page of a virtual object and plays singing video content of the virtual object on that page; in the process of playing the singing video content, the picture view angle of the singing video content and/or the action of the virtual object switch as the attribute features of the sung song change. With this technical solution, while the virtual object sings a song in a live broadcast, the action of the virtual object and/or the picture view angle of the singing video content can change automatically based on the song, so the singing video content of the virtual anchor matches the sung song and their relevance is higher. The effect of the virtual object singing live is better, the diversity and interest of the virtual object's display are improved, and the user's experience while the virtual object sings live is further improved.
In some embodiments, the live singing method may further include: a song requesting panel is displayed on a live broadcast room page, wherein the song requesting panel comprises interactive information of at least one song; and receiving the triggering operation of the user on the target song, and updating the interactive information of the target song, wherein the target song is a song in any one of the song-on-demand panels.
The song requesting panel can be an interface which is arranged on a live broadcast room page of a virtual object and used for supporting a user to request songs, the song requesting panel can comprise interactive information of at least one song, and the interactive information of the song can be voting quantity based on user triggering. Optionally, song-on-demand information of the song may be further displayed in the song-on-demand panel, where the song-on-demand information refers to related information of the song, and for example, the song-on-demand information may include at least one of information of a song name, a song cover, song duration, and the like.
Specifically, after receiving a trigger operation of the user on a preset song request key or on song request prompt information, the song requesting panel can be displayed to the user on the live broadcast room page. When a trigger operation of the user on any song in the song requesting panel is received, that song is the target song; the vote count of the target song in the song requesting panel is increased by the number corresponding to the trigger operation, and the updated interactive information is displayed. For example, if the user triggers a song in the song requesting panel twice and the song's original vote count is 2, the vote count is increased by 2, and the updated vote count is shown as 4. The trigger operation may take various forms, for example, a click operation or a double-click operation.
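The vote-update rule above can be sketched as follows (a minimal illustration; the panel representation, function name, and one-vote-per-trigger rule are assumptions for the sketch, not part of the disclosure):

```python
def update_votes(panel: dict, target_song: str, trigger_count: int) -> int:
    """Increase the vote count of the target song by the number of
    trigger operations and return the updated count for display."""
    panel[target_song] = panel.get(target_song, 0) + trigger_count
    return panel[target_song]

# The example from the text: a song with 2 original votes,
# triggered twice, shows an updated vote count of 4.
panel = {"song 3": 2}
print(update_votes(panel, "song 3", 2))  # 4
```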
Referring to fig. 2, a song-on-demand button 14 is shown in the live broadcast page of the virtual object 11, and after a trigger operation of the user on the song-on-demand button 14 is received, the song requesting panel can be shown to the user. Exemplarily, fig. 5 is a schematic diagram of a song requesting panel provided by the embodiment of the present disclosure. As shown in fig. 5, the song requesting panel 17 includes the song request information and vote counts of 5 songs: song 2 and song 4 have the same vote count of 5 votes, and song 5 has the highest vote count. The song cover of each song may be personalized in advance according to the song, and the song covers in fig. 5 are all different. Information on the currently playing song 6 is also displayed below the song requesting panel 17; no one has voted for song 6.
In the above scheme, by providing the song requesting panel, information on the songs that the virtual object sings live can be displayed, voting by users for the songs is supported, and the vote counts are displayed, so that users can learn the current song voting information in real time, improving the interaction effect of the virtual object.
In some embodiments, the singing song is determined based on the amount of interactive information of at least one song, and the live singing method may further include: receiving a song list, wherein the song list comprises song information of a plurality of songs to be sung, and the plurality of songs to be sung are determined based on the amount of the interactive information of the at least one song; and displaying the song list on the song requesting panel.
The singing song may be the song with the highest amount of interactive information in the song requesting panel, that is, the song with the highest user vote count. The singing song can be continuously updated over time: after the virtual object finishes singing one song live, it can sing the next singing song live. Optionally, live broadcast prompt information for the next singing song may be displayed in the song requesting panel.
Illustratively, referring to fig. 5, live broadcast prompt information for the next singing song may be displayed in the song requesting panel. It can be understood that when no user votes for any song, a default song may be set to be sung; for example, song 6 shown in fig. 5 is the singing song when no one has voted.
Users can select the song they hope the virtual object will sing by voting for songs in the song requesting panel; if a song has the highest vote count, the virtual object can sing that song live. In this way, the virtual object sings live according to the choices of the users watching the live broadcast, which further enriches the interaction of live singing and gives users a better interactive experience.
The song list is a list in a live song library preset for the virtual object; the live song library may include a plurality of song lists, and each song list may include the song information of a plurality of songs to be sung. The plurality of songs to be sung can be determined based on the amount of the interactive information of the at least one song; specifically, a set number of songs ranked highest by amount of interactive information can be determined as the songs to be sung, where the set number is the number of songs to be sung and can be set according to actual conditions. The plurality of songs to be sung in a song list can also be configured in a user-defined way according to song type. Optionally, each song list is pre-configured with live broadcast information such as a live broadcast time, a live broadcast sequence, and a live broadcast count: the live broadcast time may include a live broadcast start time and a live broadcast end time pre-configured for the song list, the live broadcast sequence refers to the order in which the songs in the song list are broadcast live, and the live broadcast count refers to the number of times the virtual object sings the song list live. For example, the live broadcast time set for a morning song list may be 8-10 am, and the live broadcast time set for an evening song list may be 8-10 pm.
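A pre-configured song list of the kind described might be modeled as in the following sketch (the field names and example times are illustrative assumptions, not part of the disclosure):

```python
from dataclasses import dataclass

@dataclass
class SongList:
    name: str
    songs: list          # songs to be sung, already in live broadcast sequence
    live_start: str      # pre-configured live broadcast start time
    live_end: str        # pre-configured live broadcast end time
    live_count: int = 1  # times the virtual object sings this list live

# Example lists matching the morning/evening scenario in the text.
morning = SongList("morning list", ["song 1", "song 2"], "08:00", "10:00")
evening = SongList("evening list", ["song 3", "song 5"], "20:00", "22:00")
print(morning.live_start, evening.live_end)  # 08:00 22:00
```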
In this embodiment, the terminal may receive a song list, sent by the server, that includes the song information of a plurality of songs to be sung, and display the song list in the song requesting panel. Within the live broadcast time of the song list, the terminal can sequentially receive the singing video data of each song to be sung in the list, the specific receiving order being the live broadcast sequence preset for the song list, and generate and play the singing video content of the virtual object based on the singing video data of each song; that is, the virtual object sings each song in turn according to the live broadcast sequence of the song list.
In the above scheme, by setting up song lists for the virtual object in advance, the live singing of the virtual object can be carried out within the corresponding live broadcast time, which better fits the scene of each song, satisfies the live viewing demands of users at different times, and further improves the user's live broadcast experience.
In some embodiments, the live singing method may further include: displaying interactive information from a plurality of audiences on a live broadcast room page; responding to the interactive information and/or the singing song meeting the preset condition, and playing the reply multimedia content of the virtual object which replies aiming at the interactive information on the page of the live broadcast room.
The interactive information refers to interactive text sent by a plurality of audience members watching the live broadcast of the virtual object. The terminal can receive the interactive information from the audience, display it on the live broadcast room page of the virtual object, and send it to the server. If the server determines that the number of interactive messages including a preset keyword reaches a preset threshold, and/or that the number of songs sung so far reaches a count threshold or the singing duration reaches a preset duration, the server determines that the preset condition is met and sends reply multimedia data, determined based on the interactive information, to the terminal. The terminal receives the reply multimedia data, can generate reply multimedia content based on it, and plays the reply multimedia content, in which the virtual object replies to the interactive information, on the live broadcast room page.
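The preset-condition check described above can be sketched as follows (the function name and threshold values are assumptions; the disclosure only requires that such thresholds exist):

```python
def meets_preset_condition(messages, keywords, keyword_threshold,
                           songs_sung, song_threshold,
                           sung_minutes, duration_threshold):
    """Return True when the reply scene should be triggered: enough
    interactive messages contain a preset keyword, and/or enough songs
    or enough singing time has accumulated."""
    keyword_hits = sum(1 for m in messages
                       if any(k in m for k in keywords))
    return (keyword_hits >= keyword_threshold
            or songs_sung >= song_threshold
            or sung_minutes >= duration_threshold)

# Two audience messages contain the preset keyword "chat",
# so the condition is met even though few songs have been sung.
msgs = ["please chat with us", "chat?", "nice song"]
print(meets_preset_condition(msgs, ["chat"], 2, 1, 5, 10, 60))  # True
```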
Illustratively, referring to fig. 2, the lower part of the live room page shows interactive information sent by different users watching the live singing, such as "why aren't you singing yet" sent by user A, "how are you" sent by user B, and "I came to find you" sent by user C in the figure. The bottom of the live broadcast room page also shows an editing area 13 for the current user to send interactive information, together with other function keys, such as the song-on-demand button 14, an interactive key 15, and an activity and reward key 16 in the figure, where different function keys have different functions.
In the above scheme, when it is determined based on the interactive information and/or the singing song that the scene switching condition is met, the virtual object can switch from live singing to live chatting and reply to the audience's interactive information. This realizes switching between the two live scenes of the virtual object, meets various interaction demands, and improves the diversity of the virtual object's live broadcast.
Fig. 6 is a schematic flow chart of another live singing method provided in the embodiment of the present disclosure. The method may be performed by a live singing apparatus, which may be implemented in software and/or hardware and may generally be integrated in an electronic device. As shown in fig. 6, the method is applied to a server of a virtual object, and includes:
step 201, determining a singing song of the virtual object.
In the embodiment of the present disclosure, determining a singing song of a virtual object includes: and receiving the interactive information of at least one song, and determining the singing song according to the quantity of the interactive information of at least one song.
The interactive information of a song can be the information displayed in the song requesting panel on the terminal, that is, the vote count based on user trigger operations. The server can obtain the interactive information of the plurality of songs in the song requesting panel, determine the amount of interactive information of each song, and determine the song with the highest amount of interactive information as the singing song, that is, determine the song with the highest user vote count as the singing song.
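The selection rule (highest vote count wins, with a default song when nobody has voted, as in fig. 5) can be sketched as follows; note that the tie-breaking order and the function name are assumptions of the sketch:

```python
def pick_singing_song(votes: dict, default_song: str = "song 6") -> str:
    """Determine the singing song as the song with the highest amount
    of interactive information; fall back to a default when no one voted."""
    if not votes or max(votes.values()) == 0:
        return default_song
    return max(votes, key=votes.get)

# Vote counts matching fig. 5: song 5 has the highest count.
print(pick_singing_song({"song 2": 5, "song 4": 5, "song 5": 7}))  # song 5
print(pick_singing_song({}))  # song 6
```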
Optionally, the live singing method may further include: determining a plurality of songs to be sung based on the quantity of the interactive information of at least one song; and generating a song list based on the song information of the plurality of songs to be sung and sending the song list to the terminal so that the terminal can display the song list on a song ordering panel.
Specifically, the server can determine a set number of songs ranked highest by amount of interactive information as the songs to be sung, where the set number is the number of songs to be sung and can be set according to actual conditions. A song list is generated based on the song information of the plurality of songs to be sung and sent to the terminal; the terminal receives the song list and displays it in the song requesting panel. Within the live broadcast time of the song list, the server can send the singing video data of each song to be sung to the terminal, the specific sending order being the live broadcast sequence preset for the song list. After receiving the singing video data, the terminal generates and plays the singing video content of the virtual object based on the singing video data of each song; that is, the virtual object sings each song in turn according to the live broadcast sequence of the song list.
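Determining the set number of top-voted songs as the songs to be sung can be sketched as a simple ranking (the function name and data representation are illustrative assumptions):

```python
def songs_to_be_sung(votes: dict, set_number: int) -> list:
    """Pick the set number of songs ranked highest by amount of
    interactive information, in descending vote order."""
    ranked = sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
    return [song for song, _ in ranked[:set_number]]

print(songs_to_be_sung({"song 1": 1, "song 2": 5, "song 5": 7}, 2))
# ['song 5', 'song 2']
```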
Step 202, determining audio data of a singing song, and motion image data and view angle image data corresponding to the singing song to obtain singing video data.
The singing video data may include a series of data corresponding to the singing song, and specifically may include the audio data of the singing song and the action image data and/or view angle image data corresponding to the singing song. The action of the virtual object corresponding to the action image data and the picture view angle corresponding to the view angle image data are matched with the attribute characteristics of the audio data; the attribute characteristics matched with the action of the virtual object and those matched with the picture view angle may be the same or different, and are set according to the actual situation. The audio data of the singing song refers to pre-recorded song audio corresponding to the singing song, and may be recorded by a real person.
Specifically, by searching a preset database, the audio data of the singing song, and the action image data and the view angle image data corresponding to the singing song can be determined to obtain the singing video data.
In the embodiment of the present disclosure, the live singing method may further include: and matching the corresponding action image data and view angle image data based on the attribute characteristics of the audio data, wherein the action of the virtual object corresponding to the action image data, the picture view angle corresponding to the view angle image data and the attribute characteristics of the audio data are matched, and the attribute characteristics comprise at least one of rhythm, melody and duration.
The action image data may comprise picture data of the virtual object performing a plurality of consecutive actions; that is, the action image data may comprise a plurality of action pictures describing one or more limb actions and/or expression actions of the virtual object, forming a set of action images. The view angle image data may include action images at different picture view angles; the view angles may be those of different lenses shooting the virtual object, with different display information corresponding to different view angles, where the display information may include a display size and/or a display direction of the action image.
Optionally, matching the corresponding motion image data and perspective image data based on the attribute features of the audio data may include: setting at least one action identifier and/or at least one view identifier in a playing time axis of the singing song based on the attribute characteristics of the audio data; and respectively matching the action image data of the audio clips corresponding to the action identifications and/or matching the picture visual angles corresponding to the visual angle identifications, wherein the action image data comprise action images of at least one group of actions performed by the virtual object. The attribute feature may include at least one of rhythm, melody, duration, and the like.
At least one action identifier and/or at least one view angle identifier can be set in the playing time axis of the singing song based on the attribute characteristics of the audio data of the singing song; the timestamp associated with an action identifier and the timestamp associated with a view angle identifier may be the same or different. The action image data of the audio clip corresponding to each action identifier is then matched, that is, at least one group of actions corresponding to the action identifier is matched, and the picture view angle corresponding to each view angle identifier is matched.
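One way to read such identifiers off the playing time axis is sketched below (a simplified model in which each identifier governs the audio clip up to the next identifier or the end of the song; the data layout is an assumption of the sketch):

```python
def segments_from_identifiers(duration: float, identifiers: dict) -> list:
    """Map identifiers placed on the playing time axis
    (timestamp -> matched action group or picture view angle) to
    (start, end, value) segments of the singing song."""
    stamps = sorted(identifiers)
    out = []
    for i, start in enumerate(stamps):
        end = stamps[i + 1] if i + 1 < len(stamps) else duration
        out.append((start, end, identifiers[start]))
    return out

print(segments_from_identifiers(
    30.0, {0.0: "action group A / far lens", 12.5: "action group B / near lens"}))
# [(0.0, 12.5, 'action group A / far lens'), (12.5, 30.0, 'action group B / near lens')]
```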
In the embodiment of the present disclosure, the live singing method may further include: determining target display information of the picture visual angle corresponding to the visual angle identification based on a corresponding relation between the picture visual angle and the display information which is constructed in advance, wherein the display information comprises the display size and/or the display direction of the action image; and adjusting the action image of the virtual object based on the target display information to obtain visual angle image data of the picture visual angle corresponding to the visual angle identification.
After the view angle identifiers are set based on the attribute characteristics of the singing song and the corresponding picture view angles are matched, the target display information corresponding to each picture view angle can be determined, and the action image corresponding to the timestamp of each view angle identifier is adjusted based on the target display information to obtain the view angle image data corresponding to that view angle identifier.
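Adjusting the action image according to the target display information of a matched picture view angle can be sketched as follows (the concrete scale factors, view names, and mapping are assumptions for illustration):

```python
# Assumed pre-built correspondence: picture view angle -> display information.
VIEW_DISPLAY_INFO = {
    "far lens":  {"display_size": 0.5, "display_direction": "front"},
    "near lens": {"display_size": 2.0, "display_direction": "front"},
}

def adjust_action_image(image_size, view_angle):
    """Scale the action image of the virtual object to the target
    display size of the given picture view angle."""
    info = VIEW_DISPLAY_INFO[view_angle]
    w, h = image_size
    return (int(w * info["display_size"]), int(h * info["display_size"]))

print(adjust_action_image((400, 300), "near lens"))  # (800, 600)
```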
Optionally, the action image data corresponding to the singing song may also be matched based on the song type of the singing song. The song type can be determined based on the melody of the singing song, and the corresponding action image data is obtained by matching based on the song type. There may be a variety of song types; for example, the song types may include rock, pop, ancient-style, and modern-style. In the embodiment of the present disclosure, each song type may correspond to one set of action image data, that is, to one set of actions of the virtual object. For example, a song of the ancient style may correspond to softer action data, while a rock song, whose rhythm is heavier, may correspond to more energetic, rock-style action data.
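Matching action image data by song type can be sketched as a simple lookup (the set names and the fallback are placeholders, not part of the disclosure):

```python
# Assumed mapping from song type to a set of action image data.
TYPE_TO_ACTIONS = {
    "ancient": "soft_action_set",       # softer actions for ancient-style songs
    "rock":    "energetic_action_set",  # heavier rhythm, more energetic actions
    "pop":     "pop_action_set",
}

def action_data_for(song_type: str) -> str:
    """Match the action image data for a song based on its type,
    with a neutral fallback for unknown types."""
    return TYPE_TO_ACTIONS.get(song_type, "default_action_set")

print(action_data_for("rock"))     # energetic_action_set
print(action_data_for("unknown"))  # default_action_set
```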
In the above scheme, the action image data and/or view angle image data of each singing song are matched in advance, so the matching relation can be stored in the database; the singing video data of the virtual object's singing song can then be found quickly, improving the efficiency of the virtual object's live singing.
And step 203, sending the singing video data to the terminal so that the terminal switches the view angle of the singing video content and/or the action of the virtual object along with the change of the attribute characteristics of the singing song in the process of playing the singing video content of the virtual object based on the singing video data.
After the server determines the singing video data of the singing song of the virtual object, the singing video data can be sent to the terminal, so that the terminal generates the corresponding singing video content based on the singing video data and plays it on the live broadcast room page; during the playing of the singing video content of the virtual object, the picture view angle of the singing video content and/or the action of the virtual object is switched along with changes in the attribute characteristics of the singing song.
In the embodiment of the present disclosure, the live singing method may further include: receiving interactive information from a plurality of viewers; and if the preset conditions are determined to be met based on the interactive information and/or the singing song, generating reply multimedia data based on the interactive information and sending the reply multimedia data to the terminal, so that the terminal plays the reply multimedia content replied by the virtual object aiming at the interactive information on the basis of the reply multimedia data in the live broadcasting room page.
The interactive information refers to interactive text sent by a plurality of audience members watching the live broadcast of the virtual object. The terminal can receive the interactive information from the audience, display it on the live broadcast room page of the virtual object, and send it to the server. If the server determines that the number of interactive messages including a preset keyword reaches a preset threshold, and/or that the number of songs sung so far reaches a count threshold or the singing duration reaches a preset duration, the server determines that the preset condition is met and sends reply multimedia data, determined based on the interactive information, to the terminal. The terminal receives the reply multimedia data, can generate reply multimedia content based on it, and plays the reply multimedia content, in which the virtual object replies to the interactive information, on the live broadcast room page. This arrangement realizes switching between the two live scenes of the virtual object, meets various interaction demands, and improves the diversity of the virtual object's live broadcast.
In the live singing scheme provided by the embodiment of the present disclosure, the server determines the singing song of the virtual object, determines the audio data of the singing song and the action image data and view angle image data corresponding to the singing song to obtain singing video data, and sends the singing video data to the terminal, so that, while playing the singing video content of the virtual object based on the singing video data, the terminal switches the picture view angle of the singing video content and/or the action of the virtual object along with changes in the attribute characteristics of the singing song. With this technical scheme, the singing video data of the virtual object includes the action image data and view angle image data corresponding to the song; after the singing video data is sent to the user side, the action of the virtual object and/or the picture view angle of the singing video content can change automatically based on the song while the virtual object sings live. The singing video content of the virtual anchor thus matches the singing song more closely, the correlation is stronger, the effect of the virtual object singing live is better, the diversity and interest of the virtual object display are improved, and the user's experience while watching the virtual object sing live is further improved.
Fig. 7 is a schematic structural diagram of a live broadcasting singing apparatus provided in an embodiment of the present disclosure, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 7, the apparatus includes:
a live broadcasting singing module 301, configured to display a live broadcasting room page of a virtual object, and play a singing video content of the virtual object on the live broadcasting room page;
a switching module 302, configured to switch, during the playing of the singing video content, a view angle of the singing video content and/or a motion of the virtual object according to a change of an attribute characteristic of the singing song.
Optionally, the picture view angle represents the view angle of different lenses shooting the picture of the singing video content. The lenses include static lenses and dynamic lenses, and the static lenses include at least one of a far lens, a near lens, a panoramic lens, a looking-up (low-angle) lens, and a looking-down (high-angle) lens.
Optionally, the live singing module 301 is specifically configured to:
receiving singing video data of a virtual object, wherein the singing video data comprise audio data of a singing song and action image data and/or view angle image data corresponding to the singing song, and actions of the virtual object corresponding to the action image data and picture views corresponding to the view angle image data are matched with attribute characteristics of the audio data;
and generating the singing video content of the virtual object based on the singing video data of the singing song and playing the singing video content.
Optionally, the apparatus further includes an image adjusting module, configured to:
in response to the singing video content switching from a first screen perspective to a second screen perspective, adjusting a motion image of the virtual object based on the second screen perspective.
Optionally, the singing song is associated with at least one view identifier and at least one action identifier, the view identifier corresponds to at least one picture view, the action identifier corresponds to at least one group of actions, and the view identifier and the action identifier are generated based on an attribute feature of the singing song.
Optionally, the attribute feature includes at least one of a rhythm, a melody, and a duration, and the switching module 302 is specifically configured to:
and responding to the rhythm change, melody change and/or duration change of the singing song, switching the singing video content from a third picture visual angle to a fourth picture visual angle, and/or switching the action of the virtual object from a first action to a second action, wherein the action of the virtual object comprises an expression action and a limb action.
Optionally, the device further comprises a song requesting module, specifically configured to:
a song requesting panel is displayed on the live broadcast room page, wherein the song requesting panel comprises interactive information of at least one song;
and receiving the triggering operation of a user on a target song, and updating the interactive information of the target song, wherein the target song is any song in the song-selecting panel.
Optionally, the singing song is determined based on the amount of the interactive information of the at least one song, and the apparatus further includes a song list module configured to:
receiving a song list, wherein the song list comprises song information of a plurality of songs to be performed, and the plurality of songs to be performed are determined based on the quantity of the interactive information of the at least one song;
and displaying the song list on the song ordering panel.
Optionally, the apparatus further includes a scene switching module, configured to: displaying interactive information from a plurality of audiences on the live broadcast room page;
responding to the interactive information and/or the singing song meeting preset conditions, and playing reply multimedia contents, which are replied by the virtual object aiming at the interactive information, on the live broadcast room page.
The live singing apparatus provided by the embodiment of the present disclosure can execute the live singing method provided by any embodiment of the present disclosure, and has the functional modules and beneficial effects corresponding to the executed method.
Fig. 8 is a schematic structural diagram of another live singing apparatus provided in the embodiment of the present disclosure, which may be implemented by software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 8, the apparatus includes:
a song determining module 401, configured to determine a singing song of the virtual object;
a singing video data module 402, configured to determine audio data of the singing song, and motion image data and view image data corresponding to the singing song, to obtain singing video data;
a data sending module 403, configured to send the singing video data to a terminal, so that the terminal switches, based on the singing video data, a view angle of the singing video content and/or a motion of the virtual object along with a change of an attribute characteristic of a singing song in a process of playing the singing video content of the virtual object.
Optionally, the song determining module 401 is specifically configured to:
and receiving the interactive information of at least one song, and determining the singing song according to the quantity of the interactive information of the at least one song.
Optionally, the apparatus further includes a song list generating module, configured to:
determining a plurality of songs to be sung based on the quantity of the interaction information of the at least one song;
and generating a song list based on the song information of the plurality of songs to be sung and sending the song list to the terminal so that the terminal displays the song list on a song ordering panel.
Optionally, the apparatus further includes a data matching module, configured to:
and matching corresponding action image data and view angle image data based on the attribute features of the audio data, wherein the action of the virtual object corresponding to the action image data and the picture view angle corresponding to the view angle image data are matched with the attribute features of the audio data, and the attribute features comprise at least one of rhythm, melody and duration.
Optionally, the data matching module is specifically configured to:
setting at least one action identifier and/or at least one view identifier in the playing time axis of the singing song based on the attribute characteristics of the audio data;
and respectively matching the action image data of the audio clips corresponding to the action identifications and/or matching the picture visual angles corresponding to the visual angle identifications, wherein the action image data comprise action images of the virtual object for performing at least one group of actions.
Optionally, the data matching module is specifically configured to:
determining target display information of the picture visual angle corresponding to the visual angle identification based on a corresponding relation between a picture visual angle and display information which are constructed in advance, wherein the display information comprises the display size and/or the display direction of the action image;
and adjusting the action image of the virtual object based on the target display information to obtain visual angle image data of the picture visual angle corresponding to the visual angle identification.
Optionally, the apparatus further includes a reply switching module, configured to:
receiving interactive information from a plurality of viewers;
and if the preset conditions are determined to be met based on the interaction information and/or the singing song, generating reply multimedia data based on the interaction information and sending the reply multimedia data to the terminal, so that the terminal plays reply multimedia contents, which are replied by the virtual object aiming at the interaction information, on a live broadcast room page based on the reply multimedia data.
The live singing apparatus provided by the embodiment of the present disclosure can execute the live singing method provided by any embodiment of the present disclosure, and has the functional modules and beneficial effects corresponding to the executed method.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. Referring specifically to fig. 9, a schematic diagram of an electronic device 500 suitable for implementing embodiments of the present disclosure is shown. The electronic device 500 in the disclosed embodiment may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 9, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 9 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the live performance method of the embodiment of the present disclosure when executed by the processing device 501.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: displaying a live broadcast room page of a virtual object, and playing singing video content of the virtual object on the live broadcast room page; and in the process of playing the singing video content, the picture visual angle of the singing video content and/or the action of the virtual object are/is switched along with the change of the attribute characteristics of the singing song.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining a singing song of the virtual object; determining audio data of the singing song, and action image data and visual angle image data corresponding to the singing song to obtain singing video data; and sending the singing video data to a terminal so that the terminal switches the picture view angle of the singing video content and/or the action of the virtual object along with the change of the attribute characteristics of the singing song in the process of playing the singing video content of the virtual object based on the singing video data.
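The server-side flow described above (determine the singing song, gather its audio plus matching action image data and view-angle image data, and send the bundle to the terminal) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; all class, function, and store names are assumptions introduced here.

```python
# Hypothetical sketch of the server-side flow: determine the singing song,
# assemble audio plus matching action/view-angle image data, send to terminal.
from dataclasses import dataclass, field

@dataclass
class SingingVideoData:
    song_id: str
    audio: bytes
    action_images: list = field(default_factory=list)  # action image data
    view_images: list = field(default_factory=list)    # view-angle image data

def build_singing_video_data(song_id, audio_store, action_store, view_store):
    """Assemble singing video data for the chosen song (all stores assumed)."""
    audio = audio_store[song_id]
    actions = action_store.get(song_id, [])
    views = view_store.get(song_id, [])
    return SingingVideoData(song_id, audio, actions, views)

def send_to_terminal(data, terminal):
    terminal.append(data)  # stand-in for an actual network send

terminal_queue = []
data = build_singing_video_data(
    "song-1", {"song-1": b"audio-bytes"},
    {"song-1": ["wave", "spin"]}, {"song-1": ["close-up", "panorama"]},
)
send_to_terminal(data, terminal_queue)
```

The terminal would then decode such a bundle and play the singing video content, switching views and actions as the song's attribute features change.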
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, the present disclosure provides a live singing method, including:
displaying a live broadcast room page of a virtual object, and playing singing video content of the virtual object on the live broadcast room page;
and in the process of playing the singing video content, the picture visual angle of the singing video content and/or the action of the virtual object are/is switched along with the change of the attribute characteristics of the singing song.
According to one or more embodiments of the present disclosure, in the live broadcasting singing method provided by the present disclosure, the motion data includes picture data of a plurality of continuous motions performed by the virtual object, and the shot data includes shot control information of at least one display shot.
According to one or more embodiments of the present disclosure, in the live broadcasting singing method provided by the present disclosure, the picture view angle represents the view angle from which different lenses capture the picture of the singing video content, the lenses include static lenses and dynamic lenses, and the static lenses include at least one of a far lens, a near lens, a panoramic lens, a top-down lens, and a bottom-up lens.
According to one or more embodiments of the present disclosure, in the live broadcasting singing method provided by the present disclosure, playing the singing video content of the virtual object on the page of the live broadcasting room includes:
receiving singing video data of a virtual object, wherein the singing video data comprise audio data of a singing song and action image data and/or view angle image data corresponding to the singing song, and actions of the virtual object corresponding to the action image data and picture views corresponding to the view angle image data are matched with attribute characteristics of the audio data;
and generating the singing video content of the virtual object based on the singing video data of the singing song and playing the singing video content.
According to one or more embodiments of the present disclosure, in a live singing method provided by the present disclosure, the method further includes:
in response to the singing video content switching from a first screen perspective to a second screen perspective, adjusting a motion image of the virtual object based on the second screen perspective.
According to one or more embodiments of the present disclosure, in the live broadcasting singing method provided by the present disclosure, the singing song is associated with at least one view identifier and at least one action identifier, the view identifier corresponds to at least one screen view, the action identifier corresponds to at least one group of actions, and the view identifier and the action identifier are generated based on an attribute feature of the singing song.
According to one or more embodiments of the present disclosure, in the live broadcasting singing method provided by the present disclosure, the attribute feature includes at least one of a rhythm, a melody, and a duration, and the frame view of the singing video content and/or the motion of the virtual object is switched with a change of the attribute feature of the singing song, including:
and responding to the rhythm change, melody change and/or duration change of the singing song, switching the singing video content from a third picture visual angle to a fourth picture visual angle, and/or switching the action of the virtual object from a first action to a second action, wherein the action of the virtual object comprises an expression action and a limb action.
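The switching behavior above (a rhythm, melody, and/or duration change triggers a switch from a third picture view angle to a fourth, and/or from a first action to a second) can be sketched as a small state machine. This is only an illustrative sketch; the change-detection rule, threshold-free comparison, and all state names are assumptions, not the disclosed implementation.

```python
# Illustrative sketch: switch the picture view angle and/or the virtual
# object's action when an attribute feature of the singing song changes.
def detect_change(prev, curr):
    """Return the set of attribute features that differ between two samples."""
    return {k for k in ("rhythm", "melody", "duration")
            if prev.get(k) != curr.get(k)}

def apply_switch(state, changes):
    if changes & {"rhythm", "melody"}:
        # e.g. third picture view angle -> fourth picture view angle
        state["view"] = "view-4" if state["view"] == "view-3" else "view-3"
    if changes:
        # e.g. first action -> second action (expression + limb motion)
        state["action"] = "action-2" if state["action"] == "action-1" else "action-1"
    return state

state = {"view": "view-3", "action": "action-1"}
state = apply_switch(state, detect_change({"rhythm": 90}, {"rhythm": 120}))
```

In practice the attribute samples would come from the song's audio analysis rather than hard-coded dictionaries.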
According to one or more embodiments of the present disclosure, in a live singing method provided by the present disclosure, the method further includes:
displaying a song requesting panel on the live broadcast room page, wherein the song requesting panel comprises interactive information of at least one song;
and receiving a triggering operation of a user on a target song, and updating the interactive information of the target song, wherein the target song is any song in the song requesting panel.
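A minimal sketch of the song-requesting-panel logic above: each trigger operation on a target song updates that song's interaction information (here simplified to a count). The class and attribute names are illustrative assumptions.

```python
# Hypothetical song-requesting panel: a trigger operation on a target song
# increments that song's interaction count.
class SongRequestPanel:
    def __init__(self, songs):
        self.interactions = {s: 0 for s in songs}

    def on_trigger(self, target_song):
        """Handle a user's trigger operation on any song in the panel."""
        if target_song not in self.interactions:
            raise KeyError(target_song)
        self.interactions[target_song] += 1
        return self.interactions[target_song]

panel = SongRequestPanel(["song-A", "song-B"])
panel.on_trigger("song-A")
panel.on_trigger("song-A")
```

The server would then aggregate these counts to determine the songs to be sung.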
According to one or more embodiments of the present disclosure, in the live broadcasting singing method provided by the present disclosure, the singing song is determined based on the amount of the interactive information of the at least one song, and the method further includes:
receiving a song list, wherein the song list comprises song information of a plurality of songs to be performed, and the plurality of songs to be performed are determined based on the quantity of the interactive information of the at least one song;
and displaying the song list on the song requesting panel.
According to one or more embodiments of the present disclosure, in a live singing method provided by the present disclosure, the method further includes:
displaying interactive information from a plurality of audiences on the live broadcast room page;
responding to the interactive information and/or the singing song meeting preset conditions, and playing reply multimedia contents, which are replied by the virtual object aiming at the interactive information, on the live broadcast room page.
According to one or more embodiments of the present disclosure, the present disclosure provides a live singing method, including:
determining a singing song of the virtual object;
determining audio data of the singing song, and action image data and visual angle image data corresponding to the singing song to obtain singing video data;
and sending the singing video data to a terminal so that the terminal switches the picture view angle of the singing video content and/or the action of the virtual object along with the change of the attribute characteristics of the singing song in the process of playing the singing video content of the virtual object based on the singing video data.
According to one or more embodiments of the present disclosure, in a live broadcasting singing method provided by the present disclosure, determining a singing song of the virtual object includes:
and receiving the interactive information of at least one song, and determining the singing song according to the quantity of the interactive information of the at least one song.
According to one or more embodiments of the present disclosure, in a live singing method provided by the present disclosure, the method further includes:
determining a plurality of songs to be sung based on the quantity of the interaction information of the at least one song;
and generating a song list based on the song information of the plurality of songs to be sung, and sending the song list to the terminal so that the terminal displays the song list on a song requesting panel.
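The song-list generation step above (the songs to be sung are chosen from the interaction information amounts of the candidate songs) can be sketched as a simple ranking. The `top_n` cutoff is an assumed parameter, not part of the disclosure.

```python
# Illustrative sketch: pick the songs to be sung from per-song interaction
# counts, highest count first. `top_n` is an assumption for illustration.
def build_song_list(interaction_counts, top_n=3):
    ranked = sorted(interaction_counts.items(), key=lambda kv: kv[1], reverse=True)
    return [song for song, _ in ranked[:top_n]]

song_list = build_song_list({"A": 5, "B": 9, "C": 2, "D": 7})
```

The resulting list would be sent to the terminal for display on the song requesting panel.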
According to one or more embodiments of the present disclosure, in a live singing method provided by the present disclosure, the method further includes:
and matching corresponding action image data and view angle image data based on the attribute features of the audio data, wherein the action of the virtual object corresponding to the action image data and the picture view angle corresponding to the view angle image data are matched with the attribute features of the audio data, and the attribute features comprise at least one of rhythm, melody and duration.
According to one or more embodiments of the present disclosure, in a live broadcasting singing method provided by the present disclosure, matching corresponding action image data and view image data based on attribute features of the audio data includes:
setting at least one action identifier and/or at least one view identifier in the playing time axis of the singing song based on the attribute characteristics of the audio data;
and respectively matching the action image data of the audio clips corresponding to the action identifications and/or matching the picture visual angles corresponding to the visual angle identifications, wherein the action image data comprise action images of the virtual object for performing at least one group of actions.
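The two steps above (set action/view identifiers on the song's playing time axis based on its attribute features, then match each identifier's audio clip to action image data and a picture view angle) can be sketched as follows. The beat-based placement and all identifier names are assumptions for illustration only.

```python
# Illustrative sketch: place action and view identifiers on the playback
# timeline, then match each identifier to image data for its audio clip.
def set_identifiers(duration, marker_times):
    """One action identifier and one view identifier per marker (assumed)."""
    return [{"t": t, "action_id": f"act-{i}", "view_id": f"view-{i}"}
            for i, t in enumerate(marker_times) if t <= duration]

def match_clips(identifiers, action_bank, view_bank):
    """Look up action images and a picture view angle for each identifier."""
    return [{"t": m["t"],
             "actions": action_bank.get(m["action_id"], []),
             "view": view_bank.get(m["view_id"])}
            for m in identifiers]

ids = set_identifiers(180.0, [0.0, 60.0, 120.0])
clips = match_clips(ids, {"act-0": ["wave"]}, {"view-0": "close-up"})
```

In the disclosure, the marker times would be derived from the song's rhythm, melody, and duration rather than fixed timestamps.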
According to one or more embodiments of the present disclosure, in a live singing method provided by the present disclosure, the method further includes:
determining target display information of the picture visual angle corresponding to the visual angle identification based on a corresponding relation between a picture visual angle and display information which are constructed in advance, wherein the display information comprises the display size and/or the display direction of the action image;
and adjusting the action image of the virtual object based on the target display information to obtain visual angle image data of the picture visual angle corresponding to the visual angle identification.
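The two steps above (look up display information for a picture view angle in a pre-built correspondence, then adjust the virtual object's action image accordingly) can be sketched as a lookup table plus a resize. The table entries and scale factors are illustrative assumptions.

```python
# Minimal sketch of a pre-built correspondence between a picture view angle
# and display information (display size and/or display direction).
VIEW_DISPLAY_INFO = {
    "close-up":  {"scale": 2.0, "direction": "front"},
    "panorama":  {"scale": 0.5, "direction": "front"},
    "side-shot": {"scale": 1.0, "direction": "side"},
}

def adjust_action_image(image_size, view_id):
    """Adjust the action image's display size/direction for a view angle."""
    info = VIEW_DISPLAY_INFO[view_id]
    w, h = image_size
    return {"size": (w * info["scale"], h * info["scale"]),
            "direction": info["direction"]}

adjusted = adjust_action_image((100, 200), "close-up")
```

The adjusted images form the view-angle image data sent to the terminal together with the audio data.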
According to one or more embodiments of the present disclosure, in a live singing method provided by the present disclosure, the method further includes:
receiving interactive information from a plurality of viewers;
and if the preset conditions are determined to be met based on the interaction information and/or the singing song, generating reply multimedia data based on the interaction information and sending the reply multimedia data to the terminal, so that the terminal plays reply multimedia contents, which are replied by the virtual object aiming at the interaction information, on a live broadcast room page based on the reply multimedia data.
According to one or more embodiments of the present disclosure, the present disclosure provides a live performance apparatus, including:
the live broadcasting singing module is used for displaying a live broadcasting room page of a virtual object and playing the singing video content of the virtual object on the live broadcasting room page;
and the switching module is used for switching the picture visual angle of the singing video content and/or the action of the virtual object along with the change of the attribute characteristics of the singing song in the process of playing the singing video content.
According to one or more embodiments of the present disclosure, in the live broadcasting singing apparatus provided by the present disclosure, the picture view angle represents the view angle from which different lenses capture the picture of the singing video content, the lenses include static lenses and dynamic lenses, and the static lenses include at least one of a far lens, a near lens, a panoramic lens, a top-down lens, and a bottom-up lens.
According to one or more embodiments of the present disclosure, in the live broadcasting singing apparatus provided by the present disclosure, the live broadcasting singing module is specifically configured to:
receiving singing video data of a virtual object, wherein the singing video data comprise audio data of a singing song and action image data and/or view angle image data corresponding to the singing song, and actions of the virtual object corresponding to the action image data and picture views corresponding to the view angle image data are matched with attribute characteristics of the audio data;
and generating the singing video content of the virtual object based on the singing video data of the singing song and playing the singing video content.
According to one or more embodiments of the present disclosure, in a live broadcasting singing apparatus provided by the present disclosure, the apparatus further includes an image adjusting module, configured to:
in response to the singing video content switching from a first screen perspective to a second screen perspective, adjusting a motion image of the virtual object based on the second screen perspective.
According to one or more embodiments of the present disclosure, in a live broadcasting singing apparatus provided by the present disclosure, the singing song is associated with at least one view identifier and at least one action identifier, the view identifier corresponds to at least one screen view, the action identifier corresponds to at least one group of actions, and the view identifier and the action identifier are generated based on an attribute feature of the singing song.
According to one or more embodiments of the present disclosure, in the live broadcasting singing apparatus provided by the present disclosure, the attribute feature includes at least one of a rhythm, a melody, and a duration, and the switching module is configured to:
and responding to the rhythm change, melody change and/or duration change of the singing song, switching the singing video content from a third picture visual angle to a fourth picture visual angle, and/or switching the action of the virtual object from a first action to a second action, wherein the action of the virtual object comprises an expression action and a limb action.
According to one or more embodiments of the present disclosure, in the live broadcasting singing apparatus provided by the present disclosure, the apparatus further includes a song requesting module specifically configured to:
displaying a song requesting panel on the live broadcast room page, wherein the song requesting panel comprises interactive information of at least one song;
and receiving a triggering operation of a user on a target song, and updating the interactive information of the target song, wherein the target song is any song in the song requesting panel.
According to one or more embodiments of the present disclosure, in the live broadcasting singing apparatus provided by the present disclosure, the singing song is determined based on the amount of the interactive information of the at least one song, and the apparatus further includes a song list module configured to:
receiving a song list, wherein the song list comprises song information of a plurality of songs to be performed, and the plurality of songs to be performed are determined based on the quantity of the interactive information of the at least one song;
and displaying the song list on the song requesting panel.
According to one or more embodiments of the present disclosure, in the live broadcasting singing apparatus provided by the present disclosure, the apparatus further includes a scene switching module, configured to: displaying interactive information from a plurality of audiences on the live broadcast room page;
responding to the interactive information and/or the singing song meeting preset conditions, and playing reply multimedia contents, which are replied by the virtual object aiming at the interactive information, on the live broadcast room page.
According to one or more embodiments of the present disclosure, the present disclosure provides a live performance apparatus, including:
the song determining module is used for determining the singing song of the virtual object;
the singing video data module is used for determining the audio data of the singing song, and the action image data and the visual angle image data corresponding to the singing song to obtain singing video data;
and the data sending module is used for sending the singing video data to a terminal so that the terminal switches the picture view angle of the singing video content and/or the action of the virtual object along with the change of the attribute characteristics of the singing song in the process of playing the singing video content of the virtual object based on the singing video data.
According to one or more embodiments of the present disclosure, in a live broadcasting singing apparatus provided by the present disclosure, the song determining module is specifically configured to:
and receiving the interactive information of at least one song, and determining the singing song according to the quantity of the interactive information of the at least one song.
According to one or more embodiments of the present disclosure, in a live broadcasting singing apparatus provided by the present disclosure, the apparatus further includes a singing sheet generation module configured to:
determining a plurality of songs to be sung based on the quantity of the interaction information of the at least one song;
and generating a song list based on the song information of the plurality of songs to be sung, and sending the song list to the terminal so that the terminal displays the song list on a song requesting panel.
According to one or more embodiments of the present disclosure, in a live broadcasting singing apparatus provided by the present disclosure, the apparatus further includes a data matching module, configured to:
and matching corresponding action image data and view angle image data based on the attribute features of the audio data, wherein the action of the virtual object corresponding to the action image data and the picture view angle corresponding to the view angle image data are matched with the attribute features of the audio data, and the attribute features comprise at least one of rhythm, melody and duration.
According to one or more embodiments of the present disclosure, in the live broadcasting singing apparatus provided by the present disclosure, the data matching module is specifically configured to:
setting at least one action identifier and/or at least one view identifier in the playing time axis of the singing song based on the attribute characteristics of the audio data;
and respectively matching the action image data of the audio clips corresponding to the action identifications and/or matching the picture visual angles corresponding to the visual angle identifications, wherein the action image data comprise action images of the virtual object for performing at least one group of actions.
According to one or more embodiments of the present disclosure, in the live broadcasting singing apparatus provided by the present disclosure, the data matching module is specifically configured to:
determining target display information of the picture visual angle corresponding to the visual angle identification based on a corresponding relation between a picture visual angle and display information which are constructed in advance, wherein the display information comprises the display size and/or the display direction of the action image;
and adjusting the action image of the virtual object based on the target display information to obtain visual angle image data of the picture visual angle corresponding to the visual angle identification.
According to one or more embodiments of the present disclosure, in the live broadcasting singing apparatus provided by the present disclosure, the apparatus further includes a reply switching module, configured to:
receiving interactive information from a plurality of viewers;
and if the preset conditions are determined to be met based on the interaction information and/or the singing song, generating reply multimedia data based on the interaction information and sending the reply multimedia data to the terminal, so that the terminal plays reply multimedia contents, which are replied by the virtual object aiming at the interaction information, on a live broadcast room page based on the reply multimedia data.
In accordance with one or more embodiments of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing the processor-executable instructions;
the processor is used for reading the executable instructions from the memory and executing the instructions to realize any live singing method provided by the disclosure.
According to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing any of the live singing methods provided by the present disclosure.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (20)

1. A method of live singing, the method comprising:
displaying a live broadcast room page of a virtual object, and playing singing video content of the virtual object on the live broadcast room page;
and in the process of playing the singing video content, the picture visual angle of the singing video content and/or the action of the virtual object are/is switched along with the change of the attribute characteristics of the singing song.
2. The method of claim 1, wherein the frame view angle represents a view angle of different lenses taking a frame of the singing video content, the lenses comprising a static lens and a dynamic lens, the static lens comprising at least one of a far lens, a near lens, a panoramic lens, a top-down lens, and a bottom-up lens.
3. The method of claim 1, wherein playing the singing video content of the virtual object on the live broadcast room page comprises:
receiving singing video data of a virtual object, wherein the singing video data comprises audio data of a singing song and action image data and/or view angle image data corresponding to the singing song, and the action of the virtual object corresponding to the action image data and the picture view angle corresponding to the view angle image data match attribute characteristics of the audio data;
and generating the singing video content of the virtual object based on the singing video data of the singing song, and playing the singing video content.
4. The method of claim 3, further comprising:
in response to the singing video content switching from a first picture view angle to a second picture view angle, adjusting an action image of the virtual object based on the second picture view angle.
5. The method according to claim 3 or 4, wherein the singing song is associated with at least one view angle identifier and at least one action identifier, the view angle identifier corresponds to at least one picture view angle, the action identifier corresponds to at least one group of actions, and the view angle identifier and the action identifier are generated based on attribute characteristics of the singing song.
6. The method of claim 2, wherein the attribute characteristics comprise at least one of rhythm, melody, and duration, and wherein switching the picture view angle of the singing video content and/or the action of the virtual object as the attribute characteristics of the singing song change comprises:
in response to a rhythm change, a melody change, and/or a duration change of the singing song, switching the singing video content from a third picture view angle to a fourth picture view angle, and/or switching the action of the virtual object from a first action to a second action, wherein the action of the virtual object comprises a facial expression action and a body action.
7. The method of claim 1, further comprising:
displaying a song request panel on the live broadcast room page, wherein the song request panel comprises interaction information of at least one song;
and receiving a trigger operation performed by a user on a target song, and updating the interaction information of the target song, wherein the target song is any song in the song request panel.
8. The method of claim 7, wherein the singing song is determined based on an amount of the interaction information of the at least one song, the method further comprising:
receiving a song list, wherein the song list comprises song information of a plurality of songs to be sung, and the plurality of songs to be sung are determined based on the amount of the interaction information of the at least one song;
and displaying the song list on the song request panel.
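Claims 7-8 describe choosing the songs to be sung from the amount of audience interaction each song receives. A minimal ranking sketch, in which the data layout (a song-to-count mapping) and the cut-off are assumptions for illustration:

```python
# Sketch of claims 7-8: build the "songs to be sung" list from per-song
# interaction counts collected on the song request panel.
# The dict layout and top_n cut-off are assumed, not from the patent.

def build_song_list(interactions, top_n=3):
    """Return the songs with the most interaction messages, most first."""
    ranked = sorted(interactions.items(), key=lambda kv: kv[1], reverse=True)
    return [song for song, _count in ranked[:top_n]]
```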
9. The method of claim 1, further comprising:
displaying interaction information from a plurality of audience members on the live broadcast room page;
and in response to the interaction information and/or the singing song meeting a preset condition, playing, on the live broadcast room page, reply multimedia content with which the virtual object replies to the interaction information.
10. A method of live singing, the method comprising:
determining a singing song of the virtual object;
determining audio data of the singing song, and action image data and visual angle image data corresponding to the singing song to obtain singing video data;
and sending the singing video data to a terminal, so that, in the process of playing the singing video content of the virtual object based on the singing video data, the terminal switches the picture view angle of the singing video content and/or the action of the virtual object as the attribute characteristics of the singing song change.
11. The method of claim 10, wherein determining the song sung of the virtual object comprises:
receiving interaction information of at least one song, and determining the singing song according to the amount of the interaction information of the at least one song.
12. The method of claim 11, further comprising:
determining a plurality of songs to be sung based on the amount of the interaction information of the at least one song;
and generating a song list based on song information of the plurality of songs to be sung, and sending the song list to the terminal so that the terminal displays the song list on a song request panel.
13. The method of claim 10, further comprising:
matching corresponding action image data and view angle image data based on attribute characteristics of the audio data, wherein the action of the virtual object corresponding to the action image data and the picture view angle corresponding to the view angle image data match the attribute characteristics of the audio data, and the attribute characteristics comprise at least one of rhythm, melody, and duration.
14. The method of claim 13, wherein matching corresponding action image data and view angle image data based on the attribute characteristics of the audio data comprises:
setting at least one action identifier and/or at least one view angle identifier on a playback timeline of the singing song based on the attribute characteristics of the audio data;
and matching action image data for the audio segment corresponding to each action identifier, and/or matching the picture view angle corresponding to each view angle identifier, wherein the action image data comprises action images of the virtual object performing at least one group of actions.
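Claim 14 places action and view angle identifiers on the song's playback timeline wherever an attribute characteristic changes. A sketch under stated assumptions — the segment format (start time plus tempo) and the identifier naming scheme are invented for the example, not taken from the patent:

```python
# Sketch of claim 14: mark the playback timeline with action/view
# identifiers at points where an attribute characteristic (here, tempo)
# differs from the previous segment. Naming scheme is assumed.

def set_identifiers(segments):
    """segments: list of (start_sec, tempo_bpm) pairs, in time order."""
    markers = []
    prev_tempo = None
    for start, tempo in segments:
        if tempo != prev_tempo:  # attribute changed: drop both identifiers
            markers.append({"time": start,
                            "action_id": f"action_{tempo}",
                            "view_id": f"view_{tempo}"})
        prev_tempo = tempo
    return markers
```

Segments that keep the previous tempo produce no marker, so the downstream matching step (one action group / one view per identifier) only runs at genuine changes.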
15. The method of claim 14, further comprising:
determining target display information of the picture view angle corresponding to the view angle identifier based on a pre-constructed correspondence between picture view angles and display information, wherein the display information comprises a display size and/or a display direction of the action image;
and adjusting the action image of the virtual object based on the target display information to obtain view angle image data of the picture view angle corresponding to the view angle identifier.
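Claim 15's pre-constructed correspondence between picture view angles and display information can be sketched as a lookup table plus a resize step. The table contents (scale factors, facing directions) and the image representation are illustrative assumptions:

```python
# Sketch of claim 15: a pre-built view-angle -> display-info mapping,
# used to adjust the avatar's action image for the target view.
# All scale/direction values are assumed for illustration.

VIEW_DISPLAY = {
    "close_up":  {"scale": 2.0,  "direction": "front"},
    "panorama":  {"scale": 0.5,  "direction": "front"},
    "long_shot": {"scale": 0.25, "direction": "side"},
}

def adjust_action_image(image_size, view_id):
    """Scale an action image (w, h) per the target view's display info."""
    info = VIEW_DISPLAY[view_id]
    w, h = image_size
    return {"size": (int(w * info["scale"]), int(h * info["scale"])),
            "direction": info["direction"]}
```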
16. The method of claim 10, further comprising:
receiving interaction information from a plurality of audience members;
and if it is determined, based on the interaction information and/or the singing song, that a preset condition is met, generating reply multimedia data based on the interaction information and sending the reply multimedia data to the terminal, so that the terminal plays, on a live broadcast room page based on the reply multimedia data, reply multimedia content with which the virtual object replies to the interaction information.
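Claims 9 and 16 gate the virtual object's reply on a "preset condition" over the interaction information. The patent does not define the condition; the sketch below assumes two plausible examples (a keyword hit or a burst of messages) purely for illustration:

```python
# Sketch of claims 9/16: decide whether the virtual object should reply.
# The keyword and burst-size conditions are assumed examples of a
# "preset condition"; the patent leaves the condition unspecified.

def should_reply(messages, keywords=("encore", "hello"), burst=5):
    """Reply if any message contains a keyword, or enough messages arrive."""
    if len(messages) >= burst:
        return True
    return any(k in m.lower() for m in messages for k in keywords)
```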
17. An apparatus for live singing, the apparatus comprising:
a live singing module, configured to display a live broadcast room page of a virtual object and play singing video content of the virtual object on the live broadcast room page;
and a switching module, configured to switch a picture view angle of the singing video content and/or an action of the virtual object as attribute characteristics of a singing song change in the process of playing the singing video content.
18. An apparatus for live singing, the apparatus comprising:
a song determining module, configured to determine a singing song of a virtual object;
a singing video data module, configured to determine audio data of the singing song, and action image data and view angle image data corresponding to the singing song, to obtain singing video data;
and a data sending module, configured to send the singing video data to a terminal, so that, in the process of playing singing video content of the virtual object based on the singing video data, the terminal switches a picture view angle of the singing video content and/or an action of the virtual object as attribute characteristics of the singing song change.
19. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the live singing method of any one of claims 1-16.
20. A computer-readable storage medium, characterized in that the storage medium stores a computer program for executing the live singing method of any one of claims 1-16.
CN202011460147.0A 2020-12-11 2020-12-11 Live broadcasting singing method, device, equipment and medium Pending CN112637622A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011460147.0A CN112637622A (en) 2020-12-11 2020-12-11 Live broadcasting singing method, device, equipment and medium
PCT/CN2021/128073 WO2022121558A1 (en) 2020-12-11 2021-11-02 Livestreaming singing method and apparatus, device, and medium


Publications (1)

Publication Number Publication Date
CN112637622A 2021-04-09

Family

ID=75312334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011460147.0A Pending CN112637622A (en) 2020-12-11 2020-12-11 Live broadcasting singing method, device, equipment and medium

Country Status (2)

Country Link
CN (1) CN112637622A (en)
WO (1) WO2022121558A1 (en)


Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101414322A (en) * 2007-10-16 2009-04-22 盛趣信息技术(上海)有限公司 Exhibition method and system for virtual role
CN104102146A (en) * 2014-07-08 2014-10-15 苏州乐聚一堂电子科技有限公司 Virtual accompanying dance universal control system
CN104679378A (en) * 2013-11-27 2015-06-03 苏州蜗牛数字科技股份有限公司 Music media playing mode based on virtual head portrait
US20150194185A1 (en) * 2012-06-29 2015-07-09 Nokia Corporation Video remixing system
CN105308682A (en) * 2013-06-28 2016-02-03 皇家飞利浦有限公司 System, method and devices for bluetooth party-mode
CN106303732A (en) * 2016-08-01 2017-01-04 北京奇虎科技有限公司 Interactive approach based on net cast, Apparatus and system
CN106445460A (en) * 2016-10-18 2017-02-22 渡鸦科技(北京)有限责任公司 Control method and device
US20170110154A1 (en) * 2015-10-16 2017-04-20 Google Inc. Generating videos of media items associated with a user
CN107422862A (en) * 2017-08-03 2017-12-01 嗨皮乐镜(北京)科技有限公司 A kind of method that virtual image interacts in virtual reality scenario
CN110119700A (en) * 2019-04-30 2019-08-13 广州虎牙信息科技有限公司 Virtual image control method, virtual image control device and electronic equipment
CN111179385A (en) * 2019-12-31 2020-05-19 网易(杭州)网络有限公司 Dance animation processing method and device, electronic equipment and storage medium
CN111343509A (en) * 2020-02-17 2020-06-26 聚好看科技股份有限公司 Action control method of virtual image and display equipment
CN111405357A (en) * 2019-01-02 2020-07-10 阿里巴巴集团控股有限公司 Audio and video editing method and device and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899302B (en) * 2015-06-10 2018-07-17 百度在线网络技术(北京)有限公司 Recommend the method and apparatus of music to user
JP2018109940A (en) * 2017-08-21 2018-07-12 株式会社コロプラ Information processing method and program for causing computer to execute the same
CN109189541A (en) * 2018-09-17 2019-01-11 福建星网视易信息系统有限公司 interface display method and computer readable storage medium
CN210112145U (en) * 2019-02-18 2020-02-21 阿里巴巴集团控股有限公司 Audio and video conference system and equipment
CN110850983B (en) * 2019-11-13 2020-11-24 腾讯科技(深圳)有限公司 Virtual object control method and device in video live broadcast and storage medium
CN112637622A (en) * 2020-12-11 2021-04-09 北京字跳网络技术有限公司 Live broadcasting singing method, device, equipment and medium

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022121558A1 (en) * 2020-12-11 2022-06-16 北京字跳网络技术有限公司 Livestreaming singing method and apparatus, device, and medium
WO2022223029A1 (en) * 2021-04-22 2022-10-27 北京字节跳动网络技术有限公司 Avatar interaction method, apparatus, and device
CN113205575A (en) * 2021-04-29 2021-08-03 广州繁星互娱信息科技有限公司 Display method, device, terminal and storage medium for live singing information
CN113518235A (en) * 2021-04-30 2021-10-19 广州繁星互娱信息科技有限公司 Live video data generation method and device and storage medium
CN113518235B (en) * 2021-04-30 2023-11-28 广州繁星互娱信息科技有限公司 Live video data generation method, device and storage medium
US11769289B2 (en) 2021-06-21 2023-09-26 Lemon Inc. Rendering virtual articles of clothing based on audio characteristics
WO2022271086A1 (en) * 2021-06-21 2022-12-29 Lemon Inc. Rendering virtual articles of clothing based on audio characteristics
CN113766340B (en) * 2021-09-27 2023-03-31 广州方硅信息技术有限公司 Dance music interaction method, system and device under live connected wheat broadcast and computer equipment
CN113766340A (en) * 2021-09-27 2021-12-07 广州方硅信息技术有限公司 Dance music interaction method, system and device under live connected wheat broadcast and computer equipment
CN114120943A (en) * 2021-11-22 2022-03-01 腾讯科技(深圳)有限公司 Method, device, equipment, medium and program product for processing virtual concert
CN114120943B (en) * 2021-11-22 2023-07-04 腾讯科技(深圳)有限公司 Virtual concert processing method, device, equipment and storage medium
WO2023087932A1 (en) * 2021-11-22 2023-05-25 腾讯科技(深圳)有限公司 Virtual concert processing method and apparatus, and device, storage medium and program product
CN114155322A (en) * 2021-12-01 2022-03-08 北京字跳网络技术有限公司 Scene picture display control method and device and computer storage medium
CN114363689A (en) * 2022-01-11 2022-04-15 广州博冠信息科技有限公司 Live broadcast control method and device, storage medium and electronic equipment
CN114363689B (en) * 2022-01-11 2024-01-23 广州博冠信息科技有限公司 Live broadcast control method and device, storage medium and electronic equipment
CN114745598A (en) * 2022-04-12 2022-07-12 北京字跳网络技术有限公司 Video data display method and device, electronic equipment and storage medium
CN114745598B (en) * 2022-04-12 2024-03-19 北京字跳网络技术有限公司 Video data display method and device, electronic equipment and storage medium
CN114938364A (en) * 2022-05-13 2022-08-23 杭州网易云音乐科技有限公司 Audio sorting method, audio sorting device, equipment, medium and computing equipment
CN115657862A (en) * 2022-12-27 2023-01-31 海马云(天津)信息技术有限公司 Method and device for automatically switching virtual KTV scene pictures, storage medium and equipment

Also Published As

Publication number Publication date
WO2022121558A1 (en) 2022-06-16

Similar Documents

Publication Publication Date Title
CN112637622A (en) Live broadcasting singing method, device, equipment and medium
CN112616063B (en) Live broadcast interaction method, device, equipment and medium
CN108989297B (en) Information access method, client, device, terminal, server and storage medium
CN109729372B (en) Live broadcast room switching method, device, terminal, server and storage medium
CN112601100A (en) Live broadcast interaction method, device, equipment and medium
CN102595212A (en) Simulated group interaction with multimedia content
CN109493888B (en) Cartoon dubbing method and device, computer-readable storage medium and electronic equipment
CN111343476A (en) Video sharing method and device, electronic equipment and storage medium
CN111615002B (en) Video background playing control method, device and system and electronic equipment
CN111277852A (en) Dynamic reminding method, device, equipment and storage medium
CN112291590A (en) Video processing method and device
CN113518232A (en) Video display method, device, equipment and storage medium
CN114581566A (en) Animation special effect generation method, device, equipment and medium
CN115190366B (en) Information display method, device, electronic equipment and computer readable medium
CN114615513A (en) Video data generation method and device, electronic equipment and storage medium
CN114679628B (en) Bullet screen adding method and device, electronic equipment and storage medium
CN114846808B (en) Content distribution system, content distribution method, and storage medium
CN109116718A (en) The method and apparatus of alarm clock is set
CN115086729B (en) Wheat connecting display method and device, electronic equipment and computer readable medium
CN116112617A (en) Method and device for processing performance picture, electronic equipment and storage medium
CN115665435A (en) Live broadcast processing method and device and electronic equipment
CN115550723A (en) Multimedia information display method and device and electronic equipment
CN115022702A (en) Method, device, equipment, medium and product for displaying gift in live broadcast room
CN115225948A (en) Live broadcast room interaction method, device, equipment and medium
CN116800988A (en) Video generation method, apparatus, device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210409