CN108848394A - Video live broadcast method, apparatus, terminal and storage medium
- Publication number
- CN108848394A (application number CN201810841297.2A)
- Authority
- CN
- China
- Prior art keywords
- target object
- image
- audio data
- real time
- special effect
- Legal status: Pending
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/2187—Live feed (under H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; H04N21/21—Server components or server architectures; H04N21/218—Source of audio or video content, e.g. local disk arrays)
- H04N21/4394—Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams (under H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; H04N21/43—Processing of content or additional data; H04N21/439—Processing of audio elementary streams)
- H04N21/44008—Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream (under H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs)
- H04N21/4756—End-user interface for inputting end-user data for rating content, e.g. scoring a recommended movie (under H04N21/47—End-user applications; H04N21/475—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data)
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application (under H04N21/47—End-user applications)
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects (under H04N5/00—Details of television systems; H04N5/222—Studio circuitry; Studio devices; Studio equipment)
Abstract
The invention discloses a video live broadcast method, apparatus, terminal and storage medium, belonging to the technical field of the internet. The method includes: when a live broadcast instruction is received, collecting audio data and images of a target object in real time and sending the audio data and images collected in real time to a server, the live broadcast instruction being used to indicate that the audio data and images of the target object singing a target song are to be broadcast live; during the live broadcast, determining the score of the audio data collected in real time; when the score of the audio data collected in real time meets a preset score condition, receiving a gift-giving instruction, the gift-giving instruction being used to indicate that gifts may be given to the target object; when the given gifts meet a preset gift condition, adding a special effect of the target object to the image collected in real time, the special effect being used to indicate the singing effect of the target object singing the target song; displaying the image to which the special effect has been added; and sending the image to which the special effect has been added to the server.
Description
Technical Field
The present invention relates to the technical field of the internet, and in particular to a live video broadcast method, a live video broadcast device, a live video broadcast terminal and a storage medium.
Background
With the development of internet technology, a user can record a video in a video application and share the recorded video in real time to the network platform of the video application through the network. For example, an anchor can record a video of himself or herself singing a song in real time through a live broadcast application and broadcast the singing process live.
In the related art, the video live broadcast process is generally as follows: after the user starts the video application, the terminal records a video of the user singing a song in real time through the video application and sends the video to the server in real time. The server sends the recorded video in real time to the terminals where a plurality of viewers are located. Meanwhile, the terminal can also perform beautification processing on the video, for example, whitening and smoothing the skin in multiple frames of images in the video, or adding hanging special effects at the positions of the facial features in the multiple frames of images, for example, adding a dog nose to the nose and adding an icon such as rabbit ears to the top of the head.
In the above live broadcast process, the terminal records in one direction and the user sings in one direction, so the user's enthusiasm is not high; and during the live broadcast the terminal only beautifies the images. As a result, videos broadcast live by this method are of low interest, and user activity in the video application is also low.
Disclosure of Invention
The embodiment of the invention provides a video live broadcast method, a video live broadcast device, a video live broadcast terminal and a storage medium, which can solve the problem in the related art that live-broadcast videos are of low interest. The technical scheme is as follows:
in one aspect, a video live broadcast method is provided, and the method includes:
when a live broadcasting instruction is received, acquiring audio data and images of a target object in real time, and sending the audio data and images acquired in real time to a server, wherein the live broadcasting instruction is used for indicating the audio data and images of the target object when the target object sings a target song in live broadcasting;
determining the score of the audio data collected in real time in the live broadcasting process;
when the score of the real-time collected audio data meets a preset score condition, receiving a gift giving instruction, wherein the gift giving instruction is used for indicating that gifting of gifts is allowed for the target object;
when the gifted gifts meet preset gift conditions, adding a special effect of the target object to the image acquired in real time, wherein the special effect is used for indicating the singing effect of the target object singing the target song;
displaying the image with the special effect;
and sending the image added with the special effect to the server.
Optionally, in the live broadcasting process, determining the score of the audio data collected in real time includes:
in the live broadcasting process, sending the audio data collected in real time to the server, and receiving the score of the audio data sent by the server; or,
and acquiring original singing audio data of the target song, and determining the score of the audio data acquired in real time according to the original singing audio data and the audio data acquired in real time.
Optionally, when the gifted gift meets a preset gift condition, before adding the special effect of the target object to the real-time acquired image, the method further includes:
according to gifts given to the target object, counting the number of the gifted gifts and/or the virtual resource value of the gifted gifts;
and judging whether the presented gifts meet the preset gift conditions or not according to the number of the presented gifts and/or the virtual resource numerical values of the presented gifts.
Optionally, the special effect is a facial special effect of the target object, and adding the special effect of the target object to the image acquired in real time includes:
determining the position information of the face of the target object in the image according to the image acquired in real time;
and adding the face special effect of the target object to the face position of the target object in the image according to the position information.
Optionally, the determining, according to the image acquired in real time, the position information of the face of the target object in the image includes:
receiving position information of the face of the target object in the image, which is sent by the server based on the real-time acquired image; or,
and carrying out image recognition on the image acquired in real time, recognizing the face of the target object in the image, and acquiring the position information of the face in the image.
Optionally, the adding the face special effect to the image based on the position information includes:
acquiring an avatar icon of an original singer of the target song according to the identifier of the target song;
adding an avatar icon of the original singer at a face position in the image based on the position information.
In one aspect, a video live broadcasting device is provided, the device including:
the acquisition module is configured to collect audio data and images of a target object in real time when a live broadcast instruction is received, and send the audio data and images collected in real time to a server, wherein the live broadcast instruction is used for indicating that the audio data and images of the target object singing a target song are to be broadcast live;
the determining module is used for determining the score of the audio data acquired in real time in the live broadcasting process;
the receiving module is used for receiving a gift giving instruction when the score of the real-time collected audio data meets a preset score condition, and the gift giving instruction is used for indicating that gifting of gifts to the target object is allowed;
the adding module is used for adding a special effect of the target object to the real-time collected image when the given gift meets a preset gift condition, wherein the special effect is used for indicating the singing effect of the target object singing the target song;
the display module is used for displaying the image added with the special effect;
and the sending module is used for sending the image added with the special effect to the server.
Optionally, the determining module is configured to send the audio data collected in real time to the server in a live broadcast process, and receive a score of the audio data sent by the server; or, acquiring original singing audio data of the target song, and determining the score of the audio data acquired in real time according to the original singing audio data and the audio data acquired in real time.
Optionally, the apparatus further comprises:
the statistic module is used for counting the number of the presented gifts and/or the virtual resource value of the presented gifts according to the gifts presented to the target object;
the determining module is further configured to determine whether the presented gifts meet the preset gift conditions according to the number of the presented gifts and/or the virtual resource value of the presented gifts.
Optionally, the special effect is a facial special effect of the target object, and the adding module includes:
the determining unit is used for determining the position information of the face of the target object in the image according to the image acquired in real time;
and the adding unit is used for adding the face special effect of the target object at the face position of the target object in the image according to the position information.
Optionally, the determining unit is configured to receive position information of the face of the target object in the image, which is sent by the server based on the image acquired in real time; or, carrying out image recognition on the image acquired in real time, recognizing the face of the target object in the image, and acquiring the position information of the face in the image.
Optionally, the face special effect is an avatar icon of an original singer of the target song,
the adding unit is used for acquiring an avatar icon of an original singer of the target song according to the identification of the target song; adding an avatar icon of the original singer at a face position in the image based on the position information.
In one aspect, a terminal is provided, where the terminal includes a processor and a memory, where the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement operations performed by the video live broadcast method.
In one aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the instruction is loaded and executed by a processor to implement the operations performed by the video live broadcast method as described above.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, when a live broadcast instruction is received, the terminal collects audio data and images of a target object in real time and sends the audio data and images collected in real time to a server; during the live broadcast, the terminal determines the score of the audio data collected in real time, and when the score meets a preset score condition, the terminal receives a gift-giving instruction; when the presented gifts meet a preset gift condition, the terminal adds a special effect of the target object to the image collected in real time, the special effect being used to indicate the singing effect of the target object singing the target song; and the terminal displays the image to which the special effect has been added and sends it to the server. During the live broadcast, the terminal can trigger the gift-giving instruction based on the audio data of the target object's singing and add the special effect to the image based on the presented gifts, which increases the interest of the live broadcast, enriches the live broadcast content, improves the anchor's enthusiasm for singing, promotes users' enthusiasm for giving gifts during the live broadcast, and further improves user activity in the video application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a video live broadcast method according to an embodiment of the present invention;
fig. 2 is a flowchart of a video live broadcast method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a video live broadcasting device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a video live broadcast method according to an embodiment of the present invention. The execution subject of the embodiment of the invention is a terminal, and referring to fig. 1, the method comprises the following steps:
101. when a live broadcast instruction is received, acquiring audio data and images of a target object in real time, and sending the audio data and images acquired in real time to a server, wherein the live broadcast instruction is used for indicating that the audio data and images of the target object singing a target song are to be broadcast live;
102. determining the score of the audio data collected in real time in the live broadcasting process;
103. when the score of the real-time collected audio data meets a preset score condition, receiving a gift giving instruction, wherein the gift giving instruction is used for indicating that gifting of a gift to the target object is allowed;
104. when the gifted gifts meet preset gift conditions, adding a special effect of the target object to the image acquired in real time, wherein the special effect is used for indicating the singing effect of the target object singing the target song;
105. displaying the image with the special effect;
106. and sending the image added with the special effect to the server.
Optionally, in the live broadcasting process, determining the score of the audio data collected in real time includes:
receiving, in the live broadcast process, the score of the audio data sent by the server based on the audio data collected in real time; or,
and acquiring original singing audio data of the target song, and determining the score of the audio data acquired in real time according to the original singing audio data and the audio data acquired in real time.
Optionally, when the presented gift meets a preset gift condition, before adding the special effect of the target object to the real-time acquired image, the method further includes:
according to gifts given to the target object, counting the number of the gifted gifts and/or the virtual resource value of the gifted gifts;
and determining whether the presented gifts meet preset gift conditions according to the number of the presented gifts and/or the virtual resource value of the presented gifts.
Optionally, the special effect is a facial special effect of the target object, and adding the special effect of the target object to the image acquired in real time includes:
determining the position information of the face of the target object in the image according to the image acquired in real time;
adding the face special effect of the target object to the face position of the target object in the image according to the position information.
Optionally, the determining the position information of the face of the target object in the image according to the image acquired in real time includes:
receiving the position information of the face of the target object in the image, which is sent by the server based on the image acquired in real time; or,
and carrying out image recognition on the image acquired in real time, recognizing the face of the target object in the image, and acquiring the position information of the face in the image.
Optionally, the adding the face special effect of the target object to the face position of the target object in the image according to the position information includes:
acquiring an avatar icon of an original singer of the target song according to the identifier of the target song;
based on the position information, an avatar icon of the original singer is added to the face position in the image.
In the embodiment of the invention, when a live broadcast instruction is received, the terminal collects audio data and images of a target object in real time and sends the audio data and images collected in real time to a server; during the live broadcast, the terminal determines the score of the audio data collected in real time, and when the score meets a preset score condition, the terminal receives a gift-giving instruction; when the presented gifts meet a preset gift condition, the terminal adds a special effect of the target object to the image collected in real time, the special effect being used to indicate the singing effect of the target object singing the target song; and the terminal displays the image to which the special effect has been added and sends it to the server. During the live broadcast, the terminal can trigger the gift-giving instruction based on the audio data of the target object's singing and add the special effect to the image based on the presented gifts, which increases the interest of the live broadcast, enriches the live broadcast content, improves the anchor's enthusiasm for singing, promotes users' enthusiasm for giving gifts during the live broadcast, and further improves user activity in the video application.
Fig. 2 is a flowchart of a video live broadcast method according to an embodiment of the present invention. The execution subject of the embodiment of the present invention is a terminal, and referring to fig. 2, the method includes:
201. when receiving a live broadcasting instruction, the terminal collects the audio data and images of the target object in real time and sends the audio data and images collected in real time to the server.
The live broadcast instruction is used for indicating that the audio data and images of the target object singing the target song are to be broadcast live.
In this step, when the terminal receives the live broadcast instruction, the terminal starts the camera and the audio input device to collect the images and audio data of the target object in real time, and sends the collected images and audio data of the target object to the server in real time, so that the server forwards the images and audio data of the target object to the terminals of the plurality of users watching the target object. The user can trigger the live broadcast instruction in the video application on the terminal: when the video application is started, the terminal may display a live broadcast button on the current interface, and when the terminal detects that the live broadcast button is triggered, the terminal receives the live broadcast instruction.
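For illustration only, the following is a minimal sketch of this capture-and-push loop, assuming hypothetical camera, microphone and server-client wrapper objects; their names and methods are not from the patent.

```python
# Minimal sketch of step 201: collect images and audio of the target object in
# real time and push both streams to the server. The camera, microphone and
# server objects and their method names are hypothetical placeholders.
import time

def run_live_capture(camera, microphone, server, stop_event, fps=25):
    """Capture one frame and one audio chunk per tick and send them to the server."""
    frame_interval = 1.0 / fps
    while not stop_event.is_set():
        frame = camera.read_frame()            # one image of the target object
        audio_chunk = microphone.read_chunk()  # audio captured since the last read
        server.send_image(frame)               # server forwards these to viewers
        server.send_audio(audio_chunk)
        time.sleep(frame_interval)
```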
Further, when the terminal receives the live broadcasting instruction, the terminal acquires the accompaniment audio of the target song and plays the accompaniment audio of the target song, so that the target object can sing the target song along with the accompaniment audio.
It should be noted that a video application may be installed on the terminal, and the terminal performs live video in the video application and performs data interaction with the server based on the video application. The video application can be a live broadcast application with a video live broadcast function, a short video application or a social application and the like. The server may be a background server for the video application. In the embodiment of the invention, the terminal can send the audio data and the image recorded in real time to the server, and the server pushes the audio data and the image to a plurality of users watching the target object to sing.
202. In the live broadcasting process, the terminal determines the score of the audio data collected in real time.
When the terminal collects the audio data, the terminal can analyze the audio data collected in real time to determine its score. The terminal can store the original singing audio data of the target song in advance and, according to the original singing audio data and the audio data of the target object collected in real time, score the audio data collected in real time according to a preset scoring rule.
The preset scoring rule may be set based on needs, which is not specifically limited in the embodiment of the present invention. The terminal can score the audio data corresponding to each lyric by taking one lyric as a unit. The preset scoring rule may be: the higher the similarity between the audio data corresponding to each lyric collected in real time and the audio data corresponding to each lyric in the original singing, the higher the score of the audio data collected in real time.
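As an illustration of such a scoring rule, the sketch below scores one lyric line by comparing the magnitude spectrum of the audio collected in real time with that of the original singing; the cosine-similarity measure and the 0-100 scale are assumptions made for the sketch, since the patent only requires that higher similarity yields a higher score.

```python
# Hedged sketch of per-lyric scoring: higher spectral similarity between the
# live audio and the original singing for the same lyric line gives a higher
# score. The feature choice (magnitude spectrum + cosine similarity) is an
# illustrative assumption, not the patent's prescribed rule.
import numpy as np

def lyric_score(live_audio: np.ndarray, original_audio: np.ndarray) -> float:
    """Return a 0-100 score for the audio data of one lyric line."""
    n = min(len(live_audio), len(original_audio))
    live_spec = np.abs(np.fft.rfft(live_audio[:n]))
    orig_spec = np.abs(np.fft.rfft(original_audio[:n]))
    denom = np.linalg.norm(live_spec) * np.linalg.norm(orig_spec)
    if denom == 0:
        return 0.0
    similarity = float(np.dot(live_spec, orig_spec) / denom)  # value in [0, 1]
    return round(similarity * 100.0, 1)
```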
In a possible implementation, the audio data may also be scored by the server, and the terminal obtains the score of the audio data collected in real time from the server. The process may be: in the live broadcast process, when the server receives real-time collected audio data sent by the terminal, the server scores the real-time collected audio data according to a preset scoring rule based on the real-time collected audio data and sends scores of the real-time collected audio data to the terminal, and the terminal receives the scores of the audio data sent by the server.
Further, in the live broadcasting process, the terminal can further judge and determine whether the score of the audio data collected in real time meets a preset score condition based on the score of the audio data collected in real time. The process may be: and the terminal acquires a preset score condition corresponding to the target song from the server, and judges whether the score of the audio data acquired in real time meets the preset score condition or not according to the score of the audio data acquired in real time.
The preset score condition may be that the score of the audio data is not less than a preset threshold, or the scores of at least a preset number of audio segments in a plurality of audio segments sung by the target object are not less than the preset threshold, and one audio segment may be the audio data corresponding to one lyric sung by the target object. Alternatively, the preset score condition may also be set based on needs, and this is not specifically limited in the embodiment of the present invention.
The terminal can determine the score of each audio segment sung by the target object according to the audio data collected in real time, count the scores of the multiple audio segments in real time, and determine that the score of the audio data collected in real time meets the preset score condition when at least the preset number of audio segments among the multiple audio segments have scores not less than the preset threshold.
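A minimal sketch of this preset score condition, assuming one score per sung audio segment (one lyric line per segment); the threshold and segment count below are placeholder values rather than values from the patent.

```python
# Sketch of the preset score condition: at least a preset number of the sung
# audio segments must score no less than a preset threshold. Both values are
# placeholders chosen for illustration.
def meets_score_condition(segment_scores, preset_threshold=80.0, preset_count=5):
    """Return True when enough real-time audio segments reach the threshold."""
    qualified = sum(1 for score in segment_scores if score >= preset_threshold)
    return qualified >= preset_count
```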
The terminal can collect the audio data of the target object's singing through the audio input device. When the target object sings along with the accompaniment audio, the terminal also picks up the accompaniment. In this case, when the terminal obtains the audio data collected by the audio input device, it extracts the audio data of the human-voice frequency band from the collected audio data according to the frequency band in which the human voice lies, so that the subsequent judgment and special-effect addition are performed only on the basis of the vocal-band audio data, which further improves the accuracy of the generated video file.
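The sketch below shows one way this vocal-band extraction could be done, by masking the spectrum of the captured audio; the FFT-masking approach and the nominal 80-1100 Hz band are assumptions, since the patent only states that the audio of the human-voice frequency band is extracted.

```python
# Hedged sketch of extracting the human-voice frequency band from the mixed
# capture (voice plus accompaniment). The band limits are illustrative only.
import numpy as np

def extract_vocal_band(audio: np.ndarray, sample_rate: int,
                       low_hz: float = 80.0, high_hz: float = 1100.0) -> np.ndarray:
    """Zero spectral bins outside the assumed vocal band and resynthesize."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    keep = (freqs >= low_hz) & (freqs <= high_hz)
    spectrum[~keep] = 0.0
    return np.fft.irfft(spectrum, n=len(audio))
```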
In a possible implementation, the terminal may first determine, according to the audio data collected by the audio input device, whether the target object is really singing the target song, and only when the target object is really singing does the terminal perform the step of determining the score of the collected audio data. Real singing means that the target object sings on site with his or her own voice; when the target object plays a recording of the target song sung by someone else through a playback device, for example when the original recording is played, the target object is not really singing. Specifically, the terminal extracts the audio data of the human-voice frequency band from the audio data collected by the audio input device according to the frequency band in which the human voice lies; when the extracted vocal-band audio data meets a real-singing condition, the terminal determines that the target object is really singing, and otherwise determines that the target object is not really singing. The real-singing condition may be set as needed, which is not specifically limited in the embodiment of the present invention; for example, the real-singing condition may be that the duration corresponding to the extracted vocal-band audio data reaches a preset duration, or that the pitch corresponding to the vocal-band audio data reaches a preset pitch. The preset duration or preset pitch may be set based on the duration or pitch of the corresponding segment of the original singing of the target song, which is not specifically limited in the embodiment of the present invention.
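Building on the extracted vocal-band audio, the sketch below checks the duration form of the real-singing condition: the frames of the vocal band that carry audible energy must add up to at least a preset duration. The frame size and energy threshold are illustrative assumptions.

```python
# Hedged sketch of the real-singing check: count how long the vocal band is
# actually voiced and compare with a preset duration. Thresholds are
# placeholders, not values from the patent.
import numpy as np

def is_real_singing(vocal_band_audio: np.ndarray, sample_rate: int,
                    preset_duration_s: float = 3.0,
                    frame_ms: int = 20, energy_threshold: float = 1e-3) -> bool:
    """Return True when enough vocal-band frames carry audible energy."""
    frame_len = max(1, int(sample_rate * frame_ms / 1000))
    voiced_frames = 0
    for start in range(0, len(vocal_band_audio) - frame_len + 1, frame_len):
        frame = vocal_band_audio[start:start + frame_len]
        if np.mean(frame ** 2) >= energy_threshold:
            voiced_frames += 1
    voiced_duration_s = voiced_frames * frame_ms / 1000.0
    return voiced_duration_s >= preset_duration_s
```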
203. And when the score of the audio data collected in real time meets a preset score condition, the terminal receives a gift sending instruction.
Wherein the gift giving instruction is used for indicating that the gift is allowed to be given to the target object; and when the score of the audio data collected in real time meets a preset score condition, triggering the terminal to receive a gift sending instruction, wherein the terminal can send the gift sending instruction to the server, the server sends the gift sending instruction to terminals of a plurality of users watching the singing of the target object, and the terminals of the plurality of users receive the gift sending instruction and prompt the plurality of users to give gifts to the target object.
It should be noted that when the terminal determines that the target object's score during singing meets the preset score condition, this indicates that the target object is really singing and the singing effect is good, and the terminal can then prompt the users watching the target object to start giving gifts. Because the terminal prompts other users to give gifts based on the target object's singing effect, users watching the target object are not prompted to give gifts when the target object lip-syncs or sings poorly, which ensures authenticity and fairness during the live broadcast.
204. The terminal judges whether the presented gift meets a preset gift condition or not based on the gift presented to the target object, and when the presented gift meets the preset gift condition, the terminal adds the special effect of the target object to the real-time collected image.
In an embodiment of the present invention, the preset gift condition may be: the amount of the presented gifts is accumulated to reach a preset threshold value, or the virtual value of the presented gifts is accumulated to reach a preset virtual value. The terminal determines whether the presented gifts satisfy a preset gift condition based on the number of presented gifts and/or the virtual value of the presented gifts. The virtual value refers to a value of a virtual resource spent by the other users viewing the target object when giving the gift.
When the terminal makes a judgment based on the number of gifts presented, the process may be: the terminal counts the number of the gifts given according to the gifts given to the target object, and when the number of the gifts reaches a preset threshold value, the given gifts are determined to meet preset gift conditions.
When the terminal makes a judgment based on the virtual value of the presented gift, the process may be: the terminal can acquire the virtual value of the presented gift, count the virtual value of the presented gift, and determine that the presented gift meets the preset gift condition when the virtual value of the presented gift is accumulated to the preset virtual value.
Of course, the terminal may also determine whether the presented gifts satisfy the preset gift condition by combining the number of presented gifts and their virtual value. In one possible implementation, when the virtual value of the presented gifts accumulates to the preset virtual value and the number of gifts reaches the preset threshold, the terminal determines that the presented gifts satisfy the preset gift condition. The terminal may also assign respective weights to the virtual value and the number and make a comprehensive judgment based on the weighted virtual value and the weighted number. This is not specifically limited in the embodiment of the present invention.
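A sketch of the combined judgment described above, in which the number of presented gifts and their accumulated virtual value are each compared against a threshold and weighted; all thresholds and weights here are placeholders. With equal weights and capped ratios, the weighted score reaches 1.0 only when both thresholds are met, which corresponds to the embodiment that requires both criteria; a single criterion can be used by setting its weight to 1.

```python
# Hedged sketch of the gift-condition check combining gift count and
# accumulated virtual value with per-criterion weights (all placeholders).
def meets_gift_condition(gifts, count_threshold=10, value_threshold=100.0,
                         count_weight=0.5, value_weight=0.5):
    """gifts: iterable of (gift_name, virtual_value) pairs for presented gifts."""
    gift_list = list(gifts)
    total_count = len(gift_list)
    total_value = sum(value for _, value in gift_list)
    # How far each criterion is satisfied, capped at 1.0, then weighted.
    count_ratio = min(total_count / count_threshold, 1.0)
    value_ratio = min(total_value / value_threshold, 1.0)
    return count_weight * count_ratio + value_weight * value_ratio >= 1.0
```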
In the embodiment of the present invention, the special effect is used to indicate a singing effect of the target object singing the target song.
In the embodiment of the invention, when the presented gift meets the preset gift condition, the terminal acquires the special effect of the target object from the server and adds the special effect to the image acquired in real time according to the display position of the special effect.
In a possible implementation manner, the special effect is a special effect of the face of the target object, and accordingly, this step may be: the terminal determines the position information of the face of the target object in the image based on the image acquired in real time; and the terminal adds a face special effect corresponding to the target object on the image based on the position information.
It should be noted that the terminal may determine the position information by itself, or may determine the position information by the server, and accordingly, the process of the terminal determining the position information of the face of the target object in the image may be: the terminal receives the position information of the face of the target object in the image, which is sent by the server based on the image collected in real time; or the terminal carries out image recognition on the image acquired in real time, recognizes the face of the target object in the image and acquires the position information of the face in the image. Specifically, the process of identifying the image by the terminal or the server may be: the server identifies the image based on the real-time collected image sent by the terminal through a preset identification algorithm, identifies the face in the image, obtains the position information of the face in the image, sends the position information of the face to the terminal, and the terminal receives the position information; or, the terminal identifies the image through a preset identification algorithm based on the acquired image of the target object, identifies the face in the image, and obtains the position information of the face in the image.
In a possible implementation manner, the face special effect may be an avatar icon of an original singer of the target song, and correspondingly, the step of adding, by the terminal, the face special effect corresponding to the target object on the image based on the position information may be: the terminal acquires the head portrait icon of the original singer of the target song according to the identification of the target song; the terminal adds an avatar icon of the original singer to the face position in the image based on the position information.
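As an illustration of locating the face and pasting the original singer's avatar icon at that position, here is a sketch using OpenCV; the Haar-cascade detector stands in for the unspecified preset recognition algorithm, the avatar icon is assumed to have been fetched elsewhere according to the identifier of the target song, and it is assumed to be a BGR image like the frame.

```python
# Hedged sketch of the steps above: detect the face position in the live frame,
# then overlay the original singer's avatar icon there. The Haar cascade is an
# illustrative stand-in for the recognition algorithm named in the patent.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def locate_face(frame):
    """Return (x, y, w, h) of the first detected face in the frame, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return tuple(faces[0]) if len(faces) else None

def add_avatar_effect(frame, avatar_icon, position):
    """Paste the avatar icon (assumed BGR, like the frame) at the face position."""
    x, y, w, h = position
    frame[y:y + h, x:x + w] = cv2.resize(avatar_icon, (w, h))
    return frame
```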
205. And the terminal displays the image added with the special effect.
The terminal can display the image with the special effect on the display interface.
206. And the terminal sends the image with the special effect to the server.
The terminal sends the image after the special effect is added to the server in real time so as to synchronize the display effect of the terminal to the terminals of a plurality of other users watching the target object, and of course, the terminal also sends the audio data collected in real time to the server.
In addition, when the live broadcast is finished, the terminal can also generate a video file for the target object. Specifically, when the terminal receives a live broadcast ending instruction, the terminal adds a special effect displayed in the live broadcast process to a corresponding image according to a multi-frame image collected in real time, and generates the video file by using audio data collected in real time and the multi-frame image added with the special effect.
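A sketch of how such a video file could be assembled, assuming the terminal buffered the frames and recorded which special effect applies to which frame; frames are encoded with OpenCV's VideoWriter, while muxing the real-time audio into the file (for example with ffmpeg) is left outside the sketch. The codec and frame rate are placeholder choices.

```python
# Hedged sketch of generating the video file at the end of the live broadcast:
# re-apply the recorded special effects to the buffered frames and encode them.
# Audio muxing is intentionally omitted here.
import cv2

def generate_video_file(frames, effects, output_path, fps=25):
    """frames: list of BGR images; effects: list of (frame_index, apply_fn) pairs."""
    effect_map = {}
    for frame_index, apply_fn in effects:
        effect_map.setdefault(frame_index, []).append(apply_fn)
    height, width = frames[0].shape[:2]
    writer = cv2.VideoWriter(output_path,
                             cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    for i, frame in enumerate(frames):
        for apply_fn in effect_map.get(i, []):
            frame = apply_fn(frame)  # e.g. the add_avatar_effect sketch above
        writer.write(frame)
    writer.release()
```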
The live broadcast ending instruction may be triggered by a target object, for example, the target object is triggered by a record ending button, or by a specified voice instruction, and the like. In addition, the live broadcast ending instruction can be triggered by the terminal, for example, the terminal generates the live broadcast ending instruction based on the interactive duration trigger.
The terminal may receive the live broadcast ending instruction when the terminal detects that the recording-end button is triggered; or when the terminal detects the specified voice; or when, while the terminal is timing the live broadcast duration, the current timing reaches the preset live broadcast duration.
In the embodiment of the invention, when a live broadcast instruction is received, the terminal collects audio data and images of a target object in real time and sends the audio data and images collected in real time to a server; during the live broadcast, the terminal determines the score of the audio data collected in real time, and when the score meets a preset score condition, the terminal receives a gift-giving instruction; when the presented gifts meet a preset gift condition, the terminal adds a special effect of the target object to the image collected in real time, the special effect being used to indicate the singing effect of the target object singing the target song; and the terminal displays the image to which the special effect has been added and sends it to the server. During the live broadcast, the terminal can trigger the gift-giving instruction based on the audio data of the target object's singing and add the special effect to the image based on the presented gifts, which increases the interest of the live broadcast, enriches the live broadcast content, improves the anchor's enthusiasm for singing, promotes users' enthusiasm for giving gifts during the live broadcast, and further improves user activity in the video application.
Fig. 3 is a schematic structural diagram of a video live broadcasting device according to an embodiment of the present invention. Referring to fig. 3, the apparatus includes: an acquisition module 301, a determination module 302, a receiving module 303, an adding module 304, a display module 305, and a sending module 306.
The acquisition module 301 is configured to acquire audio data and images of a target object in real time when a live broadcast instruction is received, and send the acquired audio data and images to a server, where the live broadcast instruction is used to indicate that the audio data and images of the target object are live broadcast when a target song is sung;
a determining module 302, configured to determine a score of the audio data collected in real time in a live broadcast process;
a receiving module 303, configured to receive a gift-giving instruction when the score of the real-time acquired audio data meets a preset score condition, where the gift-giving instruction is used to instruct that a gift is allowed to be given to the target object;
an adding module 304, configured to add a special effect of the target object to the image acquired in real time when the presented gift meets a preset gift condition, where the special effect is used to indicate a singing effect of the target object singing the target song;
a display module 305, configured to display the image with the special effect added;
a sending module 306, configured to send the image to which the special effect is added to the server.
Optionally, the determining module is configured to receive, in the live broadcast process, the score of the audio data sent by the server based on the audio data collected in real time; or, obtain original singing audio data of the target song, and determine the score of the audio data collected in real time according to the original singing audio data and the audio data collected in real time.
Optionally, the apparatus further comprises:
the statistic module is used for counting the number of the gifted gifts and/or the virtual resource value of the gifted gifts according to the gifted gifts for the target object;
the determining module is further configured to determine whether the presented gifts meet preset gift conditions according to the number of the presented gifts and/or the virtual resource value of the presented gifts.
Optionally, the special effect is a facial special effect of the target object, and the adding module includes:
the determining unit is used for determining the position information of the face of the target object in the image according to the image acquired in real time;
and the adding unit is used for adding the face special effect of the target object in the face position of the target object in the image according to the position information.
Optionally, the determining unit is configured to receive position information of the face of the target object in the image, which is sent by the server based on the image acquired in real time; or, carrying out image recognition on the image acquired in real time, recognizing the face of the target object in the image, and acquiring the position information of the face in the image.
Optionally, the face special effect is an avatar icon of an original singer of the target song,
the adding unit is used for acquiring the head portrait icon of the original singer of the target song according to the identification of the target song; based on the position information, an avatar icon of the original singer is added to the face position in the image.
In the embodiment of the invention, when a live broadcast instruction is received, the terminal collects audio data and images of a target object in real time and sends the audio data and images collected in real time to a server; during the live broadcast, the terminal determines the score of the audio data collected in real time, and when the score meets a preset score condition, the terminal receives a gift-giving instruction; when the presented gifts meet a preset gift condition, the terminal adds a special effect of the target object to the image collected in real time, the special effect being used to indicate the singing effect of the target object singing the target song; and the terminal displays the image to which the special effect has been added and sends it to the server. During the live broadcast, the terminal can trigger the gift-giving instruction based on the audio data of the target object's singing and add the special effect to the image based on the presented gifts, which increases the interest of the live broadcast, enriches the live broadcast content, improves the anchor's enthusiasm for singing, promotes users' enthusiasm for giving gifts during the live broadcast, and further improves user activity in the video application.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that: in the video live broadcasting device provided in the above embodiment, only the division of the above functional modules is used for illustration, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the above described functions. In addition, the video live broadcast device and the video live broadcast method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments and are not described herein again.
Fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention. The terminal 400 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 400 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, the terminal 400 includes: a processor 401 and a memory 402.
Processor 401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 401 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 402 is used to store at least one instruction for execution by processor 401 to implement the video live method provided by method embodiments herein.
In some embodiments, the terminal 400 may further optionally include: a peripheral interface 403 and at least one peripheral. The processor 401, memory 402 and peripheral interface 403 may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface 403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 404, touch screen display 405, camera 406, audio circuitry 407, positioning components 408, and power supply 409.
The peripheral interface 403 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 401 and the memory 402. In some embodiments, processor 401, memory 402, and peripheral interface 403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 401, the memory 402 and the peripheral interface 403 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 404 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 404 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 404 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 405 is a touch display screen, the display screen 405 also has the ability to capture touch signals on or over the surface of the display screen 405. The touch signal may be input to the processor 401 as a control signal for processing. At this point, the display screen 405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 405, providing the front panel of the terminal 400; in other embodiments, there may be at least two display screens 405, respectively disposed on different surfaces of the terminal 400 or in a folded design; in still other embodiments, the display screen 405 may be a flexible display disposed on a curved surface or a folded surface of the terminal 400. The display screen 405 may even be arranged in a non-rectangular irregular pattern, i.e. a specially shaped screen. The display screen 405 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 406 is used to capture images or video. Optionally, camera assembly 406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 401 for processing, or inputting the electric signals to the radio frequency circuit 404 for realizing voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 400. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 407 may also include a headphone jack.
The positioning component 408 is used to locate the current geographic position of the terminal 400 for navigation or LBS (Location Based Service). The positioning component 408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 409 is used to supply power to the various components in the terminal 400. The power supply 409 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 409 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, the terminal 400 also includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: acceleration sensor 411, gyro sensor 412, pressure sensor 413, fingerprint sensor 414, optical sensor 415, and proximity sensor 416.
The acceleration sensor 411 may detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with the terminal 400. For example, the acceleration sensor 411 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 401 may control the touch display screen 405 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 411. The acceleration sensor 411 may also be used to collect motion data for games or user activity.
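As a minimal, illustrative sketch of the landscape/portrait decision described above (the function name and the comparison rule are assumptions, not details taken from this disclosure):

```python
def choose_orientation(gravity_x: float, gravity_y: float) -> str:
    """Pick a UI orientation from the gravity components (in m/s^2) on the
    terminal's x (short) and y (long) axes; the z component is not needed.
    A hypothetical helper, not code from this disclosure."""
    # If gravity lies mostly along the short axis, the terminal is being held
    # sideways, so a landscape layout fits the screen better.
    return "landscape" if abs(gravity_x) > abs(gravity_y) else "portrait"
```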
The gyro sensor 412 may detect the body direction and rotation angle of the terminal 400, and may cooperate with the acceleration sensor 411 to capture the user's 3D manipulation of the terminal 400. Based on the data collected by the gyro sensor 412, the processor 401 may implement functions such as motion sensing (for example, changing the UI according to a tilting operation of the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 413 may be disposed on the side frame of the terminal 400 and/or in a lower layer of the touch display screen 405. When the pressure sensor 413 is disposed on the side frame of the terminal 400, it can detect the user's grip signal on the terminal 400, and the processor 401 performs left- or right-hand recognition or a shortcut operation according to the grip signal collected by the pressure sensor 413. When the pressure sensor 413 is disposed in the lower layer of the touch display screen 405, the processor 401 controls operable controls on the UI according to the user's pressure operation on the touch display screen 405. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
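One purely illustrative reading of the left- or right-hand recognition mentioned above is to compare the grip pressure reported on the two side edges; the rule below is an assumption, not the disclosed algorithm.

```python
def guess_holding_hand(left_edge_pressure: float, right_edge_pressure: float) -> str:
    """Guess which hand grips the terminal from the pressure reported on the
    two side edges. The comparison rule is an illustrative assumption only."""
    if left_edge_pressure > right_edge_pressure:
        return "left"
    if right_edge_pressure > left_edge_pressure:
        return "right"
    return "unknown"  # equal readings: no confident guess
```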
The fingerprint sensor 414 is used to collect the user's fingerprint, and the processor 401 identifies the user according to the fingerprint collected by the fingerprint sensor 414, or the fingerprint sensor 414 identifies the user according to the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 401 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 414 may be disposed on the front, back, or side of the terminal 400. When a physical button or a vendor logo is provided on the terminal 400, the fingerprint sensor 414 may be integrated with the physical button or the vendor logo.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, the processor 401 may control the display brightness of the touch display screen 405 based on the ambient light intensity collected by the optical sensor 415: when the ambient light intensity is high, the display brightness of the touch display screen 405 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 405 is decreased. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity collected by the optical sensor 415.
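As an illustrative sketch of the brightness adjustment described above, assuming a simple linear mapping from ambient lux to a 0-255 brightness level (the lux range, scale, and mapping are assumptions, not values from this disclosure):

```python
def display_brightness(ambient_lux: float,
                       min_level: int = 10,
                       max_level: int = 255,
                       max_lux: float = 1000.0) -> int:
    """Map ambient light intensity to a display brightness level."""
    ratio = max(0.0, min(ambient_lux / max_lux, 1.0))  # clamp to [0, 1]
    return int(min_level + ratio * (max_level - min_level))
```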
The proximity sensor 416, also known as a distance sensor, is typically disposed on the front panel of the terminal 400. The proximity sensor 416 is used to collect the distance between the user and the front surface of the terminal 400. In one embodiment, when the proximity sensor 416 detects that the distance between the user and the front surface of the terminal 400 gradually decreases, the processor 401 controls the touch display screen 405 to switch from the bright-screen state to the off-screen state; when the proximity sensor 416 detects that the distance between the user and the front surface of the terminal 400 gradually increases, the processor 401 controls the touch display screen 405 to switch from the off-screen state to the bright-screen state.
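A minimal sketch of the screen-state switching described above; the near/far thresholds and the hysteresis band are illustrative assumptions rather than values from this disclosure.

```python
def next_screen_state(current_state: str, distance_cm: float,
                      near_cm: float = 3.0, far_cm: float = 5.0) -> str:
    """Switch between the bright-screen and off-screen states based on the
    distance reported by the proximity sensor. The two thresholds form a
    small hysteresis band so the screen does not flicker."""
    if distance_cm <= near_cm:
        return "off"          # the user is close to the front panel
    if distance_cm >= far_cm:
        return "bright"       # the user has moved away again
    return current_state      # inside the hysteresis band: keep the state
```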
Those skilled in the art will appreciate that the configuration shown in Fig. 4 does not limit the terminal 400, which may include more or fewer components than shown, combine some components, or use a different arrangement of components.
In an exemplary embodiment, a computer-readable storage medium, such as a memory including instructions, is also provided; the instructions are executable by a processor in the terminal to perform the live video method in the above embodiments. For example, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements, improvements and the like made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.
Claims (14)
1. A method for live video, the method comprising:
when a live broadcast instruction is received, collecting audio data and images of a target object in real time, and sending the audio data and images collected in real time to a server, wherein the live broadcast instruction is used for instructing collection of the audio data and images of the target object while the target object sings a target song during a live broadcast;
determining a score of the audio data collected in real time during the live broadcast;
when the score of the audio data collected in real time meets a preset score condition, receiving a gift giving instruction, wherein the gift giving instruction is used for indicating that giving gifts to the target object is allowed;
when the given gifts meet a preset gift condition, adding a special effect of the target object to the image collected in real time, wherein the special effect is used for indicating the singing effect of the target object singing the target song;
displaying the image with the special effect;
and sending the image added with the special effect to the server.
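The following is a minimal, non-normative sketch of the client-side flow recited in claim 1. It is not the patented implementation; every object passed in (capture source, server proxy, effect renderer, display, and the two condition callbacks) is a hypothetical placeholder used only to make the order of the steps concrete.

```python
def run_live_broadcast(capture, server, effect_renderer, display,
                       score_condition, gift_condition):
    """Hypothetical client-side loop for the method of claim 1."""
    for audio_frame, image in capture:               # collect audio and images in real time
        server.send(audio_frame, image)              # send the raw data to the server
        score = server.score(audio_frame)            # determine the score of the audio data
        if score_condition(score):                   # preset score condition met:
            gifts = server.receive_gifts()           # gift giving is now allowed
            if gift_condition(gifts):                # preset gift condition met:
                image = effect_renderer.add_effect(image)  # add the singing special effect
        display.show(image)                          # display the (possibly decorated) image
        server.send_image(image)                     # send the image with the effect back
```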
2. The method of claim 1, wherein determining the score of the audio data collected in real time during the live broadcast comprises:
receiving the score of the audio data, which is sent by the server based on the audio data collected in real time during the live broadcast; or
acquiring original singing audio data of the target song, and determining the score of the audio data collected in real time according to the original singing audio data and the audio data collected in real time.
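As an illustration of the second alternative in claim 2 only: if the audio collected in real time and the original singing audio are reduced to per-frame pitch sequences, a score can be computed by comparing the two. The pitch representation, tolerance, and 0-100 scale below are assumptions, not details from the claim.

```python
def score_against_original(live_pitches, original_pitches, tolerance=1.0):
    """Score the live vocals against the original singing audio.

    Both inputs are assumed to be per-frame pitch sequences (e.g. in
    semitones) of equal length."""
    if not live_pitches or len(live_pitches) != len(original_pitches):
        return 0.0
    hits = sum(1 for live, ref in zip(live_pitches, original_pitches)
               if abs(live - ref) <= tolerance)
    return 100.0 * hits / len(original_pitches)
```

Under the first alternative of claim 2, this computation is not needed on the terminal: the score is simply received from the server.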
3. The method of claim 1, wherein before adding the special effect of the target object to the image collected in real time when the given gifts meet the preset gift condition, the method further comprises:
counting, according to the gifts given to the target object, the number of the given gifts and/or the virtual resource value of the given gifts;
and judging, according to the number of the given gifts and/or the virtual resource value of the given gifts, whether the given gifts meet the preset gift condition.
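A minimal sketch of the counting and judging steps of claim 3, assuming each gift is represented as a (name, virtual resource value) pair and that the preset gift condition is expressed as thresholds on the count and/or the total value; these representations are illustrative assumptions.

```python
def gifts_meet_condition(gifts, min_count=None, min_value=None):
    """Judge whether the given gifts meet a preset gift condition.

    `gifts` is assumed to be a list of (gift_name, virtual_resource_value)
    pairs; the thresholds stand in for the preset gift condition."""
    count = len(gifts)
    total_value = sum(value for _, value in gifts)
    count_ok = min_count is None or count >= min_count
    value_ok = min_value is None or total_value >= min_value
    return count_ok and value_ok
```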
4. The method of claim 1, wherein the special effect is a facial special effect of the target object, and wherein adding the special effect of the target object to the image collected in real time comprises:
determining position information of the face of the target object in the image according to the image collected in real time;
and adding the facial special effect of the target object to the face position of the target object in the image according to the position information.
5. The method of claim 4, wherein determining the position information of the face of the target object in the image according to the image collected in real time comprises:
receiving the position information of the face of the target object in the image, which is sent by the server based on the image collected in real time; or
performing image recognition on the image collected in real time, recognizing the face of the target object in the image, and obtaining the position information of the face in the image.
6. The method of claim 4, wherein the facial special effect is an avatar icon of an original singer of the target song, and wherein adding the facial special effect of the target object to the face position of the target object in the image according to the position information comprises:
acquiring an avatar icon of an original singer of the target song according to the identifier of the target song;
adding an avatar icon of the original singer at a face position in the image based on the position information.
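For illustration only, claims 4 to 6 can be pictured as overlaying the original singer's avatar icon on the detected face region. The sketch below uses the Pillow imaging library and assumes the position information arrives as a (left, top, right, bottom) box; neither choice is prescribed by the claims.

```python
from PIL import Image

def add_avatar_effect(frame: Image.Image, avatar_icon: Image.Image,
                      face_box: tuple) -> Image.Image:
    """Overlay the original singer's avatar icon at the detected face position.

    `face_box` is assumed to be the (left, top, right, bottom) position
    information of the face in the frame, obtained either from the server or
    from local image recognition; resizing the icon to the face region is an
    illustrative choice."""
    left, top, right, bottom = face_box
    icon = avatar_icon.resize((right - left, bottom - top))
    out = frame.copy()
    # If the icon has an alpha channel, use it as the paste mask so that
    # transparent parts of the icon keep the original frame pixels.
    mask = icon if icon.mode == "RGBA" else None
    out.paste(icon, (left, top), mask)
    return out
```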
7. A video live broadcasting apparatus, characterized in that the apparatus comprises:
a collecting module, used for collecting audio data and images of a target object in real time when a live broadcast instruction is received, and for sending the audio data and images collected in real time to a server, wherein the live broadcast instruction is used for instructing collection of the audio data and images of the target object while the target object sings a target song during a live broadcast;
the determining module is used for determining the score of the audio data acquired in real time in the live broadcasting process;
the receiving module is used for receiving a gift giving instruction when the score of the audio data collected in real time meets a preset score condition, wherein the gift giving instruction is used for indicating that giving gifts to the target object is allowed;
the adding module is used for adding a special effect of the target object to the image collected in real time when the given gift meets a preset gift condition, wherein the special effect is used for indicating the singing effect of the target object singing the target song;
the display module is used for displaying the image added with the special effect;
and the sending module is used for sending the image added with the special effect to the server.
8. The apparatus of claim 7,
the determining module is used for receiving the score of the audio data, which is sent by the server based on the audio data collected in real time during the live broadcast; or acquiring original singing audio data of the target song, and determining the score of the audio data collected in real time according to the original singing audio data and the audio data collected in real time.
9. The apparatus of claim 7, further comprising:
the statistics module is used for counting, according to the gifts given to the target object, the number of the given gifts and/or the virtual resource value of the given gifts;
the determining module is further configured to determine, according to the number of the given gifts and/or the virtual resource value of the given gifts, whether the given gifts meet the preset gift condition.
10. The apparatus of claim 7, wherein the special effect is a facial special effect of the target object, and wherein the adding module comprises:
the determining unit is used for determining the position information of the face of the target object in the image according to the image acquired in real time;
and the adding unit is used for adding the face special effect of the target object at the face position of the target object in the image according to the position information.
11. The apparatus of claim 10,
the determining unit is used for receiving the position information of the face of the target object in the image, which is sent by the server based on the image collected in real time; or performing image recognition on the image collected in real time, recognizing the face of the target object in the image, and obtaining the position information of the face in the image.
12. The apparatus of claim 10, wherein the facial special effect is an avatar icon of an original singer of the target song,
the adding unit is used for acquiring an avatar icon of an original singer of the target song according to the identification of the target song; adding an avatar icon of the original singer at a face position in the image based on the position information.
13. A terminal, characterized in that the terminal comprises a processor and a memory, wherein at least one instruction is stored in the memory, and the instruction is loaded and executed by the processor to implement the operations performed by the live video method according to any one of claims 1 to 6.
14. A computer-readable storage medium having stored therein at least one instruction which is loaded and executed by a processor to perform operations performed by a live video method as claimed in any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810841297.2A CN108848394A (en) | 2018-07-27 | 2018-07-27 | Net cast method, apparatus, terminal and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108848394A | 2018-11-20 |
Family
ID=64195140
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810841297.2A Pending CN108848394A (en) | 2018-07-27 | 2018-07-27 | Net cast method, apparatus, terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108848394A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150350733A1 (en) * | 2014-06-02 | 2015-12-03 | Grid News Bureau, LLC | Systems and methods for opinion sharing related to live events |
CN106210855A (en) * | 2016-07-11 | 2016-12-07 | 网易(杭州)网络有限公司 | Object displaying method and device |
CN106059904A (en) * | 2016-07-14 | 2016-10-26 | 中青冠岳科技(北京)有限公司 | Method and system for scoring live broadcasting song based on instant communication software |
CN106341720A (en) * | 2016-08-18 | 2017-01-18 | 北京奇虎科技有限公司 | Method for adding face effects in live video and device thereof |
CN107222755A (en) * | 2017-06-27 | 2017-09-29 | 北京小米移动软件有限公司 | Program dissemination method, apparatus and system |
CN107682729A (en) * | 2017-09-08 | 2018-02-09 | 广州华多网络科技有限公司 | It is a kind of based on live interactive approach and live broadcast system, electronic equipment |
CN108010541A (en) * | 2017-12-14 | 2018-05-08 | 广州酷狗计算机科技有限公司 | Method and device, the storage medium of pitch information are shown in direct broadcasting room |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109327709B (en) * | 2018-11-23 | 2021-02-12 | 网易(杭州)网络有限公司 | Prop delivery method and device, computer storage medium and electronic equipment |
CN109327709A (en) * | 2018-11-23 | 2019-02-12 | 网易(杭州)网络有限公司 | Stage property put-on method and device, computer storage medium, electronic equipment |
CN109348249A (en) * | 2018-12-06 | 2019-02-15 | 广州酷狗计算机科技有限公司 | Determine that the user of number album obtains the method, apparatus and storage medium of quantity |
CN109889858B (en) * | 2019-02-15 | 2021-06-11 | 广州酷狗计算机科技有限公司 | Information processing method and device for virtual article and computer readable storage medium |
CN109889858A (en) * | 2019-02-15 | 2019-06-14 | 广州酷狗计算机科技有限公司 | Information processing method, device and the computer readable storage medium of virtual objects |
CN110611825A (en) * | 2019-08-22 | 2019-12-24 | 广州华多网络科技有限公司 | Gift target value setting method, live broadcast system, server and storage medium |
CN110913264A (en) * | 2019-11-29 | 2020-03-24 | 北京达佳互联信息技术有限公司 | Live data processing method and device, electronic equipment and storage medium |
CN110913264B (en) * | 2019-11-29 | 2020-10-20 | 北京达佳互联信息技术有限公司 | Live broadcast data processing method and device, electronic equipment and storage medium |
CN114503598B (en) * | 2019-12-19 | 2024-01-16 | 多玩国株式会社 | Management server, user terminal, gift system, and information processing method |
CN114503598A (en) * | 2019-12-19 | 2022-05-13 | 多玩国株式会社 | Management server, user terminal, gift system, and information processing method |
CN111182355B (en) * | 2020-01-06 | 2021-05-04 | 腾讯科技(深圳)有限公司 | Interaction method, special effect display method and related device |
CN111182355A (en) * | 2020-01-06 | 2020-05-19 | 腾讯科技(深圳)有限公司 | Interaction method, special effect display method and related device |
CN111212314A (en) * | 2020-01-17 | 2020-05-29 | 广州华多网络科技有限公司 | Method and device for displaying special effect of virtual gift and electronic equipment |
CN113473170A (en) * | 2021-07-16 | 2021-10-01 | 广州繁星互娱信息科技有限公司 | Live broadcast audio processing method and device, computer equipment and medium |
CN113473170B (en) * | 2021-07-16 | 2023-08-25 | 广州繁星互娱信息科技有限公司 | Live audio processing method, device, computer equipment and medium |
Similar Documents
Publication | Title |
---|---|
CN112911182B (en) | Game interaction method, device, terminal and storage medium |
CN107967706B (en) | Multimedia data processing method and device and computer readable storage medium |
CN109246452B (en) | Virtual gift display method and device |
CN110267067B (en) | Live broadcast room recommendation method, device, equipment and storage medium |
CN108401124B (en) | Video recording method and device |
CN108848394A (en) | Net cast method, apparatus, terminal and storage medium |
CN108008930B (en) | Method and device for determining K song score |
WO2019114514A1 (en) | Method and apparatus for displaying pitch information in live broadcast room, and storage medium |
CN108965757B (en) | Video recording method, device, terminal and storage medium |
CN110688082B (en) | Method, device, equipment and storage medium for determining adjustment proportion information of volume |
CN110956971B (en) | Audio processing method, device, terminal and storage medium |
CN110290392B (en) | Live broadcast information display method, device, equipment and storage medium |
CN110533585B (en) | Image face changing method, device, system, equipment and storage medium |
CN108897597B (en) | Method and device for guiding configuration of live broadcast template |
CN111083516A (en) | Live broadcast processing method and device |
CN111402844B (en) | Song chorus method, device and system |
CN109743461B (en) | Audio data processing method, device, terminal and storage medium |
CN112541959A (en) | Virtual object display method, device, equipment and medium |
CN111083526A (en) | Video transition method and device, computer equipment and storage medium |
CN111083513B (en) | Live broadcast picture processing method and device, terminal and computer readable storage medium |
CN110808021B (en) | Audio playing method, device, terminal and storage medium |
CN111081277B (en) | Audio evaluation method, device, equipment and storage medium |
CN111092991A (en) | Lyric display method and device and computer storage medium |
CN112511889B (en) | Video playing method, device, terminal and storage medium |
CN111131867B (en) | Song singing method, device, terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20181120 |
|
RJ01 | Rejection of invention patent application after publication |