CN111131867A - Song singing method, device, terminal and storage medium


Info

Publication number
CN111131867A
CN111131867A
Authority
CN
China
Prior art keywords
singing
song
receiving
reward
segment
Prior art date
Legal status
Granted
Application number
CN201911397437.2A
Other languages
Chinese (zh)
Other versions
CN111131867B (en)
Inventor
邓一雷
唐劲
苏裕贤
江倩雯
黄湘宇
阮陈贵
苏卓斌
Current Assignee
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201911397437.2A priority Critical patent/CN111131867B/en
Publication of CN111131867A publication Critical patent/CN111131867A/en
Application granted granted Critical
Publication of CN111131867B publication Critical patent/CN111131867B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/4126 The peripheral being portable, e.g. PDAs or mobile phones
    • H04N 21/42203 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • H04N 21/4334 Recording operations
    • H04N 21/4751 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for defining user accounts, e.g. accounts for children
    • H04N 21/4784 Supplemental services, e.g. displaying phone caller identification, shopping application receiving rewards
    • H04N 21/8113 Monomedia components thereof involving special audio data, e.g. different tracks for different languages comprising music, e.g. song in MP3 format

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Child & Adolescent Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)

Abstract

The application discloses a song singing method, apparatus, terminal, and storage medium, and belongs to the technical field of multimedia. The method includes: when a leading instruction is received on a leading interface of a target song, playing the accompaniment audio of a first song segment and performing leading recording to obtain the singing audio of the first song segment; after the recording ends, generating singing receiving invitation information, which links to a singing receiving interface of the target song, based on the currently logged-in first user account, the target song identifier, and the singing audio of the first song segment; and sharing the singing receiving invitation information with a second user account, so that the second user opens the singing receiving interface from the invitation information and performs singing receiving of the first song segment on that interface. With this method and apparatus, multiple users can sing the same song together, interactivity during the song singing process is increased, and users' interactive singing needs are met.

Description

Song singing method, device, terminal and storage medium
Technical Field
The present application relates to the field of multimedia technologies, and in particular, to a song singing method, apparatus, terminal, and storage medium.
Background
At present, many music applications provide a singing function through which a user can sing along with a song's accompaniment. In some cases, however, a user may also want to invite other users to sing.
In the related art, when a user wants to invite other users to sing a song, the user can select a target song in a music application and send a singing link for that song to the other users. After receiving the singing link, the other users can open the singing interface of the target song from the link and sing along with the target song's accompaniment on that interface.
In the related art, other users can only be invited to sing a song on their own, so interactivity between users is low and users' interactive singing needs cannot be met.
Disclosure of Invention
The embodiments of the application provide a song singing method, apparatus, terminal, and storage medium, which can solve the problem in the related art that interactivity between users is low and users' interactive singing needs cannot be met. The technical solution is as follows:
in one aspect, a song singing method is provided and applied to a first terminal, and the method includes:
when a target song-based leading interface receives a leading instruction, playing accompaniment audio of a first song segment, and carrying out leading recording to obtain singing audio of the first song segment, wherein the first song segment is any one of song segments of a target song;
after the recording is finished, if a singing receiving invitation instruction is received based on the singing receiving interface, generating singing receiving invitation information based on a currently logged first user account, a target song identifier and the singing audio of the first song segment, wherein the singing receiving invitation information is used for being linked to the singing receiving interface of the target song, and the singing receiving interface is used for guiding a user to singing the first song segment;
and sharing the singing receiving invitation information with a second user account, so as to instruct the second user to open the singing receiving interface based on the singing receiving invitation information and perform singing receiving of the first song segment on the singing receiving interface.
Optionally, the target song includes a plurality of song segments, and the first song segment is a first song segment of the plurality of song segments, or the first song segment is any one selected from the plurality of song segments.
Optionally, before the singing receiving invitation instruction is received based on the singing receiving interface, the method further includes:
displaying a reward setting interface, wherein the reward setting interface is used for setting the singing receiving reward to be issued;
when a setting completion instruction is received based on the reward setting interface, acquiring information of the singing receiving reward that has been set and is to be issued;
the generating of the singing receiving invitation information based on the currently logged-in first user account, the target song identification and the singing audio of the first song segment comprises:
and generating the singing receiving invitation information based on the information of the singing receiving reward, the first user account, the target song identification and the singing audio of the first song segment.
Optionally, the reward setting interface includes a value setting entry for receiving the singing reward and a number setting entry for receiving the singing reward, and the information of the singing reward includes a value of the singing reward and the number of the singing reward.
Optionally, the singing receiving invitation information includes a graphic identifier, and the graphic identifier carries a link address of the singing receiving interface.
Optionally, the sharing the sing-receiving invitation information to a second user account includes:
the singing receiving invitation information is published to an information aggregation page, so that a second terminal where the second user account is located displays the information aggregation page including the singing receiving invitation information; or,
and sending the singing receiving invitation information to a second terminal where at least one second user account in the user relationship chain is located based on the user relationship chain of the first user account.
Optionally, after sharing the sing-receiving invitation information to the second user account, the method further includes:
playing the singing receiving audio of the target song based on the leading interface or the singing receiving interface, wherein the singing receiving audio is obtained by splicing the singing audio of the first song segment and the singing audio of the second song segment, the singing audio of the second song segment is obtained when the second user performs singing receiving of the first song segment based on the singing receiving interface, and the second song segment is a song segment included in the target song that is different from the first song segment.
In one aspect, a song singing method is provided and applied to a second terminal, and the method includes:
receiving singing receiving invitation information shared by a first user account, wherein the singing receiving invitation information is generated by a first terminal where the first user account is located based on the first user account, a target song identifier and a singing audio of a first song fragment, the first song fragment is any song fragment of a target song, and the singing receiving invitation information is used for being linked to a singing receiving interface of the target song;
displaying the singing receiving interface based on the singing receiving invitation information;
and when a singing receiving instruction is received based on the singing receiving interface, playing the accompaniment audio of a second song segment, and carrying out singing receiving recording to obtain the singing audio of the second song segment, wherein the second song segment is the song segment which is included in the target song and is different from the first song segment.
Optionally, after performing the singing receiving recording to obtain the singing audio of the second song segment, the method further includes:
and acquiring a singing receiving reward, wherein the singing receiving reward is set by the first user account or a server.
Optionally, before obtaining the singing receiving reward, the method further includes:
determining whether the singing audio of the second song segment meets a reward condition;
and if the singing audio of the second song segment meets the reward condition, acquiring the singing receiving reward.
Optionally, the obtaining of the singing receiving reward if the singing audio of the second song segment meets the reward condition includes:
if the singing audio of the second song segment meets the reward condition, displaying a reward acquisition option in the singing receiving interface;
and when a trigger instruction for the reward acquisition option is received, acquiring a target singing receiving reward from at least one singing receiving reward, wherein the target singing receiving reward is any one of the at least one singing receiving reward, and the at least one singing receiving reward is set by the first user account or the server for the singing receiving operation of the target song.
Optionally, after the target singing receiving reward is obtained from the at least one singing receiving reward, the method further includes:
and displaying the acquisition information of the at least one singing receiving reward, wherein the acquisition information comprises the value of each singing receiving reward in the at least one singing receiving reward and an acquirer.
In one aspect, an apparatus for singing a song is provided, and is applied to a first terminal, and the apparatus includes:
the recording module is used for playing the accompaniment audio of a first song segment when a target song-based leading interface receives a leading instruction, and carrying out leading recording to obtain the singing audio of the first song segment, wherein the first song segment is any one of the song segments of the target song;
the generating module is used for generating singing receiving invitation information based on a currently logged first user account, a target song identifier and a singing audio of the first song segment after recording is finished, wherein the singing receiving invitation information is used for being linked to a singing receiving interface of the target song, and the singing receiving interface is used for guiding a user to sing the first song segment;
and the sharing module is used for sharing the singing receiving invitation information to a second user account so as to indicate that the second user triggers the singing receiving interface based on the singing receiving invitation information and sings the first song segment based on the singing receiving interface.
Optionally, the target song includes a plurality of song segments, and the first song segment is a first song segment of the plurality of song segments, or the first song segment is any one selected from the plurality of song segments.
Optionally, the generating module is configured to:
displaying a reward setting interface, wherein the reward setting interface is used for setting the singing receiving reward to be issued;
when a setting completion instruction is received based on the reward setting interface, acquiring information of the singing receiving reward that has been set and is to be issued;
and generating the singing receiving invitation information based on the information of the singing receiving reward, the first user account, the target song identification and the singing audio of the first song segment.
Optionally, the reward setting interface includes a value setting entry for receiving the singing reward and a number setting entry for receiving the singing reward, and the information of the singing reward includes a value of the singing reward and the number of the singing reward.
Optionally, the singing receiving invitation information includes a graphic identifier, and the graphic identifier carries a link address of the singing receiving interface.
Optionally, the sharing module is configured to:
the singing receiving invitation information is published to an information aggregation page, so that a second terminal where the second user account is located displays the information aggregation page including the singing receiving invitation information; or,
and sending the singing receiving invitation information to a second terminal where at least one second user account in the user relationship chain is located based on the user relationship chain of the first user account.
Optionally, the apparatus further comprises:
and the playing module is used for playing the singing receiving audio of the target song based on the leading interface or the singing receiving interface, wherein the singing receiving audio is obtained by splicing the singing audio of the first song segment and the singing audio of the second song segment, the singing audio of the second song segment is obtained when the second user performs singing receiving of the first song segment based on the singing receiving interface, and the second song segment is a song segment included in the target song that is different from the first song segment.
In one aspect, an apparatus for singing a song is provided, and is applied to a second terminal, and the apparatus includes:
the receiving module is used for receiving singing receiving invitation information shared by a first user account, wherein the singing receiving invitation information is generated by a first terminal where the first user account is located based on the first user account, a target song identifier and a singing audio of a first song fragment, the first song fragment is any song fragment of a target song, and the singing receiving invitation information is used for being linked to a singing receiving interface of the target song;
the display module is used for displaying the singing receiving interface based on the singing receiving invitation information;
and the recording module is used for playing the accompaniment audio of a second song segment when a singing receiving instruction is received based on the singing receiving interface, and performing singing receiving recording to obtain the singing audio of the second song segment, wherein the second song segment is a song segment which is included in the target song and is different from the first song segment.
Optionally, the apparatus further comprises:
and the obtaining module is used for obtaining a singing receiving reward, and the singing receiving reward is set by the first user account or the server.
Optionally, the obtaining module includes:
a determining unit, configured to determine whether the singing audio of the second song segment meets a reward condition;
and the obtaining unit is used for obtaining the singing receiving reward if the singing audio of the second song fragment meets the reward condition.
Optionally, the obtaining unit is configured to:
if the singing audio of the second song segment meets the reward condition, displaying a reward acquisition option in the singing receiving interface;
and when a trigger instruction for the reward acquisition option is received, acquiring a target singing receiving reward from at least one singing receiving reward, wherein the target singing receiving reward is any one of the at least one singing receiving reward, and the at least one singing receiving reward is set by the first user account or the server for the singing receiving operation of the target song.
Optionally, the apparatus further comprises:
and the display module is used for displaying the acquisition information of the at least one singing receiving reward, and the acquisition information comprises the value of each singing receiving reward in the at least one singing receiving reward and an acquirer.
In one aspect, a terminal is provided, and the terminal includes:
one or more processors;
one or more memories for storing the one or more processor-executable instructions;
wherein the one or more processors are configured to perform any of the song singing methods described above.
In one aspect, a non-transitory computer-readable storage medium is provided, wherein instructions of the storage medium, when executed by a processor of a terminal, enable the terminal to perform any of the above song singing methods.
In one aspect, a computer program product is provided for implementing any of the above song singing methods when executed.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
in the embodiments of the application, a user can lead a target song, that is, sing one song segment of the target song, and then send a singing receiving invitation to other users, inviting them to receive the target song, that is, to sing its other song segments. In this way, multiple users can sing the same song together, interactivity during the song singing process is increased, and users' interactive singing needs are met.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic diagram of a singing receiving system provided in an embodiment of the present application;
fig. 2 is a schematic diagram of another song singing system provided in the embodiment of the present application;
fig. 3 is a flowchart of a song singing method provided in an embodiment of the present application;
FIG. 4 is a diagram of a leading interface of a target song according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a leading recording interface according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a recording synthesis interface provided by an embodiment of the present application;
fig. 7 is a schematic diagram of a leading interface after the leading recording is finished, provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of a red envelope setting interface according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram of singing receiving invitation information provided in an embodiment of the present application;
FIG. 10 is a schematic diagram of a singing receiving interface for a target song according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a singing receiving recording interface provided in an embodiment of the present application;
fig. 12 is a schematic diagram of a red envelope grabbing interface according to an embodiment of the present disclosure;
FIG. 13 is a schematic diagram of a red envelope acquisition interface provided by an embodiment of the present application;
fig. 14 is a block diagram of a song singing apparatus provided in an embodiment of the present application;
FIG. 15 is a block diagram of another song singing apparatus provided by an embodiment of the present application;
fig. 16 is a block diagram of a terminal according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before explaining the embodiments of the present application in detail, an application scenario of the embodiments of the present application will be described.
The song singing method provided by the embodiments of the application is applied to a song singing receiving scene. The scene involves a leader and a receiver. The leader can select a target song in a singing receiving application and lead it, that is, sing one song segment of the target song, and then issue a singing receiving invitation. The receiver then performs singing receiving of the song segment sung by the leader based on the leader's invitation, so that multiple users sing the same song together, interactivity during the song singing process is increased, and users' interactive singing needs are met. The singing receiving application is used to provide a song singing receiving service for users, and may be a client application, a web application, an applet, or the like, which is not limited in the embodiments of the present application.
Furthermore, in the song singing receiving scene, the leader can set a singing receiving reward when issuing the singing receiving invitation, and after the receiver completes the singing receiving of the song, the receiver can claim the singing receiving reward set by the leader. The singing receiving reward may be a virtual article, and the virtual article may be a virtual gift, virtual currency, or an electronic red envelope, which is not limited in the embodiments of the present application.
Next, an implementation environment related to the embodiments of the present application will be described.
Fig. 1 is a schematic diagram of a singing receiving system provided in an embodiment of the present application. As shown in fig. 1, the system includes a first terminal 10 and a second terminal 20, and the first terminal 10 and the second terminal 20 may be connected through a wired network or a wireless network. The first terminal 10 is the terminal used by the leader, the second terminal 20 is the terminal used by the receiver, and the first terminal 10 and the second terminal 20 may interact according to the song singing method provided by the embodiments of the application, so as to realize song singing receiving among multiple users. The first terminal 10 and the second terminal 20 may be mobile phones, tablet computers, computers, or the like.
Fig. 2 is a schematic diagram of another song singing system provided in an embodiment of the present application. As shown in fig. 2, the system includes a first terminal 10, a server 30, and a second terminal 20, which may be connected to each other through a wired network or a wireless network. As an example, the first terminal 10 may interact with the second terminal 20 through a singing receiving application to enable song singing receiving. The singing receiving application is used to provide a song singing receiving service for users, and may be a client application, a web application, an applet, or the like, which is not limited in the embodiments of the application. For example, the first terminal 10 has the singing receiving application installed, and the leader may open the application, lead a target song through it, and issue a singing receiving invitation. The server 30 may be a background server of the singing receiving application.
Fig. 3 is a flowchart of a song singing method provided in an embodiment of the present application, where the method is applied to the above-mentioned singing receiving system shown in fig. 1 or fig. 2, and as shown in fig. 3, the method includes the following steps:
step 301: and the first terminal displays a leading interface of the target song.
The target song is the song to be sung in relay. The leading interface of the target song is used for guiding the first user to lead the target song, where leading refers to singing a certain song segment of the target song. The first user is the leader. The song segment the leader sings may be any song segment of the target song, such as the beginning segment of the target song, or any song segment of the target song selected by the first user.
As an example, the lead interface of the target song includes lyrics of a first song segment displayed sequentially in a play time order, and a recording option.
The first song segment is a song segment to be sung by the leader, and can be any song segment of the target song. Illustratively, the target song includes a plurality of song segments, and the first song segment is a first song segment of the plurality of song segments, or the first song segment is any one selected from the plurality of song segments. For example, the first song segment may be a beginning segment of the target song, or any song segment of the target song may be selected by the user as the first song segment.
In addition, the first song segment may also be a song segment of a preset duration. For example, the first song segment may be a song segment of a preset duration starting from a leading start time point of the target song; the leading start time point may be a preset start point, such as the beginning of the song, or any time point in the target song selected by the first user.
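As a rough illustration of how such a preset-duration segment could be derived from a leading start time point, the following sketch clamps the segment to the song's total length. The names (SongSegment, pickSegmentByTime) and the 30-second default duration are assumptions for illustration, not taken from the application itself.

```kotlin
// Hypothetical sketch: derive a song segment of a preset duration from a
// chosen leading start time point, clamped to the song's total length.
data class SongSegment(val startMs: Long, val endMs: Long)

fun pickSegmentByTime(
    songDurationMs: Long,
    leadStartMs: Long = 0L,          // e.g. the beginning of the song
    presetDurationMs: Long = 30_000L // assumed preset duration
): SongSegment {
    val start = leadStartMs.coerceIn(0L, songDurationMs)
    val end = (start + presetDurationMs).coerceAtMost(songDurationMs)
    return SongSegment(start, end)
}

fun main() {
    // A 3-minute song, segment led from 1:00 for the preset duration.
    println(pickSegmentByTime(songDurationMs = 180_000, leadStartMs = 60_000))
}
```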
The recording option is used for starting an audio recording function so as to record the audio sung by the user. When the user wants to start the lead, the recording option may be triggered and the first song segment may be started to sing.
As an example, before the leading interface of the target song is displayed, a song selection interface may be displayed, where the song selection interface includes song identifiers of a plurality of songs. When a song selection instruction is received based on the song selection interface, the song corresponding to the song identifier selected by the instruction may be determined as the target song to be led, and the leading interface of the target song is then displayed. The song selection interface is used for providing a plurality of songs for the user to choose the song to be led. The song identifier may be a song name or a song number, and the like, which is not limited in the embodiments of the application.
As an example, after the first user opens the singing receiving application in the first terminal, the first terminal may display a main interface of the application, the main interface includes a song menu, and the user may select the target song to lead from the song menu. After the user selects the target song, the first terminal can display the leading interface of the target song so as to guide the user to lead the target song.
Step 302: and when the first terminal receives a sing instruction based on the target song, playing the accompaniment audio of the first song segment, and carrying out the sing recording to obtain the singing audio of the first song segment.
The first user may trigger the leading instruction through a specified operation, where the specified operation may be a trigger operation on the recording option in the leading interface, or may be a voice operation or a gesture operation, and the like, which is not limited in the embodiments of the present application. The accompaniment audio of the first song segment may be the accompaniment audio in accompaniment mode, or may be the audio in original-singing mode.
As an example, the lead interface of the target song includes a recording option and lyrics of a first song segment sequentially displayed according to a playing time sequence, and when a trigger operation on the recording option is detected, the lead instruction is determined to be received.
When the leading instruction is received based on the leading interface of the target song, the first terminal can play the accompaniment audio of the first song segment and can also display the lyrics of the first song segment sequentially according to their playing time order, so that the first user can sing the first song segment following the accompaniment audio and the lyrics. In addition, after receiving the leading instruction, the first terminal can start the audio recording function to perform the leading recording, so as to record the audio sung by the first user.
Referring to fig. 4, fig. 4 is a schematic diagram of the leading interface of a target song according to an embodiment of the present application. As shown in fig. 4, the leading interface includes lyrics and a recording option; when the first user wants to start leading, the first user can click the recording option to start singing, so that the first terminal records the first user's singing. After the first user clicks the recording option and performs the leading, the first terminal may display the leading recording interface shown in fig. 5, and may display the recording synthesis interface shown in fig. 6 after the recording is completed.
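The step-302 flow described above (play the accompaniment of the first song segment, show its lyrics in play-time order, and record the lead vocal) could be organized roughly as in the sketch below. AccompanimentPlayer, VoiceRecorder, LyricsView and LeadRecordingSession are placeholder names invented for illustration; they are not APIs of any particular terminal platform.

```kotlin
// Hypothetical sketch of the step-302 flow: on a leading instruction, show the
// lyrics of the first song segment, start its accompaniment, and record the voice.
interface AccompanimentPlayer {
    fun play(segment: LongRange)
}
interface VoiceRecorder {
    fun start()
    fun stop(): ByteArray
}
interface LyricsView {
    fun showInTimeOrder(lines: List<Pair<Long, String>>)
}

class LeadRecordingSession(
    private val player: AccompanimentPlayer,
    private val recorder: VoiceRecorder,
    private val lyricsView: LyricsView
) {
    fun onLeadInstruction(segment: LongRange, lyrics: List<Pair<Long, String>>) {
        lyricsView.showInTimeOrder(lyrics)  // lyrics of the first song segment
        player.play(segment)                // accompaniment audio of the segment
        recorder.start()                    // leading recording begins
    }

    fun onRecordingFinished(): ByteArray = recorder.stop() // raw vocal audio
}
```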
Step 303: after the recording is finished, if a singing receiving invitation instruction is received on the basis of a singing receiving interface, the first terminal generates singing receiving invitation information on the basis of a currently logged first user account, a target song identifier and a singing audio of a first song segment, the singing receiving invitation information is used for being linked to a singing receiving interface of the target song, and the singing receiving interface is used for guiding a user to perform singing receiving on the first song segment.
The target song identifier may be a name or a number of the target song, and the like, which is not limited in the embodiment of the present application.
As an example, after the recording is finished, the first terminal may synthesize the recorded audio and the accompaniment audio of the first song segment to obtain the singing audio of the first song segment.
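The application only states that the recorded audio is synthesized with the accompaniment audio; one minimal way to picture such a step, assuming both tracks are PCM buffers at the same sample rate, is a sample-wise mix like the sketch below. It is illustrative only, not the application's actual synthesis method.

```kotlin
// Illustrative sketch only: mix a recorded vocal track with the accompaniment
// by averaging PCM samples of the same sample rate, with a simple clipping guard.
fun mixVocalWithAccompaniment(vocal: FloatArray, accompaniment: FloatArray): FloatArray {
    val length = maxOf(vocal.size, accompaniment.size)
    return FloatArray(length) { i ->
        val v = if (i < vocal.size) vocal[i] else 0f
        val a = if (i < accompaniment.size) accompaniment[i] else 0f
        ((v + a) / 2f).coerceIn(-1f, 1f)
    }
}
```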
The singing receiving invitation instruction can be triggered by a user through a specified operation, the specified operation can be a touch screen operation, a voice operation or a gesture operation, and the like, which is not limited in the embodiment of the application.
As an example, after the recording is finished, the first terminal may display a singing receiving invitation entry in the singing receiving interface, and when a trigger operation on the singing receiving invitation entry is received, it may be determined that a singing receiving invitation instruction has been received. For example, the singing receiving invitation entry may be an invite-friend option.
As another example, after the recording is finished, the first terminal may further display a reward setting interface, where the reward setting interface is used to set a to-be-issued singing receiving reward, and when a setting completion instruction is received based on the reward setting interface, obtain information of the to-be-issued singing receiving reward after the setting is completed. And then, when an invitation instruction is received based on the singing receiving interface, generating singing receiving invitation information based on the information of the singing receiving reward, the first user account, the target song identification and the singing audio of the first song segment.
The singing receiving reward may be a virtual article, or may be a score or a grade, and the like, which is not limited in the embodiments of the present application. The virtual article may be a virtual gift, virtual currency, or an electronic red envelope. The electronic red envelope may be an ordinary red envelope (each red envelope has the same value) or a lucky-draw red envelope (the value of each red envelope is generated randomly), which is not limited in the embodiments of the present application.
As one example, the reward setting interface may be used to set at least one of the value of the singing receiving reward and the number of singing receiving rewards, and the information of the singing receiving reward includes at least one of the value and the number. The value of the singing receiving reward may be an amount of virtual currency or the monetary amount of an electronic red envelope, and the like. For example, the reward setting interface includes a value setting entry and a number setting entry for the singing receiving reward, and the information of the singing receiving reward includes the value of the singing receiving reward and the number of singing receiving rewards.
As an example, when the singing receiving reward is an electronic red envelope, after the information of the singing receiving reward to be issued is acquired, the first terminal may further generate the singing receiving red envelope based on the information of the singing receiving reward.
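For illustration only, the sketch below splits a singing receiving red-envelope total into a set number of randomly valued envelopes, each worth at least one cent. The actual allocation rule is not specified in the application; the lucky-draw behaviour and the name splitRedEnvelope are assumptions.

```kotlin
import kotlin.random.Random

// Illustrative sketch: split a red-envelope total (in cents) into `count`
// randomly valued envelopes, each at least one cent. Not the application's rule.
fun splitRedEnvelope(totalCents: Long, count: Int, random: Random = Random.Default): List<Long> {
    require(count > 0 && totalCents >= count) { "total must cover at least 1 cent per envelope" }
    var remaining = totalCents
    val amounts = mutableListOf<Long>()
    for (i in count downTo 2) {
        // Leave at least 1 cent for each of the remaining envelopes.
        val max = remaining - (i - 1)
        val amount = random.nextLong(1, max + 1)
        amounts += amount
        remaining -= amount
    }
    amounts += remaining
    return amounts.shuffled(random)
}

fun main() {
    println(splitRedEnvelope(totalCents = 1000, count = 5)) // five random amounts summing to 1000
}
```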
The singing receiving invitation information may be in the form of a web page link, a picture, or the like, which is not limited in the embodiments of the present application. As an example, the singing receiving invitation information includes a graphic identifier carrying the link address of the singing receiving interface. For example, the graphic identifier may be a bar code or a two-dimensional code. For example, the singing receiving invitation information may be in picture form, and the picture includes the graphic identifier. Taking the singing receiving reward being a virtual red envelope as an example, the singing receiving invitation information can be in electronic red envelope form, and the red envelope icon includes the graphic identifier.
Taking the singing receiving reward being an electronic red envelope as an example, please refer to fig. 7. Fig. 7 is a schematic view of the leading interface after the leading recording is completed, provided in an embodiment of the present application. As shown in fig. 7, the interface includes a "go to red envelope" option, and when a trigger operation of the first user on the "go to red envelope" option is detected, the red envelope setting interface shown in fig. 8 may be displayed, which can be used to set the total amount of the red envelope and the number of red envelopes. After the first user sets the total amount and the number of red envelopes, if a trigger operation on the "generate singing receiving red envelope" option in fig. 8 is detected, the singing receiving red envelope can be generated, and the singing receiving invitation information shown in fig. 9 is displayed. As shown in fig. 9, the singing receiving invitation information is a sharing picture, and the sharing picture includes a red envelope icon and an invite-friend option. The singing receiving red envelope icon includes a two-dimensional code, and the two-dimensional code carries the link address of the singing receiving interface. The invite-friend option is used to share the singing receiving invitation information with friends so as to invite them to receive singing.
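A minimal sketch of assembling the singing receiving invitation information and the link address that such a two-dimensional code might carry is shown below. The data fields, host, path and query parameter names are invented for illustration and do not reflect any real service.

```kotlin
import java.net.URLEncoder

// Illustrative sketch: build the invitation payload and the link address a
// graphic identifier (e.g. a two-dimensional code) could carry. All names are
// hypothetical.
data class PickupInvitation(
    val leaderAccount: String,   // first user account
    val songId: String,          // target song identifier
    val leadAudioUrl: String,    // uploaded singing audio of the first song segment
    val rewardTotalCents: Long?, // optional singing receiving reward information
    val rewardCount: Int?
)

fun PickupInvitation.toLinkAddress(): String {
    fun enc(s: String) = URLEncoder.encode(s, "UTF-8")
    val params = buildList {
        add("leader=${enc(leaderAccount)}")
        add("song=${enc(songId)}")
        add("audio=${enc(leadAudioUrl)}")
        rewardTotalCents?.let { add("rewardTotal=$it") }
        rewardCount?.let { add("rewardCount=$it") }
    }
    return "https://example.invalid/pickup?" + params.joinToString("&")
}

fun main() {
    val link = PickupInvitation("user_1001", "song_42", "https://example.invalid/a.m4a", 1000, 5)
        .toLinkAddress()
    println(link) // this string would then be encoded into the two-dimensional code
}
```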
It should be noted that, while the first terminal is performing the leading recording, or after the recording is finished, the leading recording may be performed again when a re-sing instruction is received. The re-sing instruction can be triggered by the user through a specified operation; for example, the specified operation may be a trigger operation on a re-sing option in the leading interface.
Step 304: the first terminal shares the singing receiving invitation information to a second user account to indicate that the second user triggers a singing receiving interface based on the singing receiving invitation information, and the first song segment is sung-received based on the singing receiving interface.
The first user account is an account of the first user, i.e., an account of the claimant. The second user account is the account of the second user, i.e. the account of the recipient. The user account may be a user name or an Identity (ID), and the like, which is not limited in this embodiment.
As an example, the first terminal may share the sing-receiving invitation information to the second user account when receiving the sharing instruction of the sing-receiving invitation information. The sharing instruction can be triggered by a user through a specified operation, and the specified operation can be a touch screen operation, a voice operation or a gesture operation, and the like, which is not limited in the embodiment of the application.
As an example, after the singing receiving invitation information is generated, a sharing entry may be displayed in the interface, and when a trigger operation on the sharing entry is detected, it may be determined that the sharing instruction for the singing receiving invitation information has been received. For example, the sharing entry may be the "invite friend" option shown in fig. 9.
As an example, the operation of sharing the sing-receiving invitation information to the second user account includes the following two implementation manners:
the first implementation mode comprises the following steps: and issuing the singing receiving invitation information to an information aggregation page so that a second terminal where a second user account is located displays the information aggregation page comprising the singing receiving invitation information.
That is, the singing invitation information can be issued to the information sharing platform, so that other users can view the singing invitation information from the information sharing platform.
The information aggregation page is used for aggregating information published by a user, for example, the information aggregation page may be an information aggregation page of an instant messaging application, such as a circle of friends, or an information aggregation page of a social application, such as a microblog or an information square.
The second implementation mode comprises the following steps: and sending the singing receiving invitation information to a second terminal where at least one second user account in the user relationship chain is located based on the user relationship chain of the first user account.
And at least one second user account in the user relationship chain can be selected from the user relationship chain by the first user. In addition, at least one second user account in the user relationship chain may include a personal account and may also include a group account, that is, the sing-receiving invitation information may be sent to a certain friend and may also be sent to a group.
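The two sharing paths above could be dispatched roughly as in the following sketch; FeedPublisher and MessageChannel are placeholder interfaces, and the rule of falling back to the information aggregation page when no accounts are selected is an assumption for illustration, not part of the application.

```kotlin
// Hypothetical sketch of the two sharing implementations described above.
interface FeedPublisher {
    fun publish(ownerAccount: String, content: String)   // information aggregation page
}
interface MessageChannel {
    fun send(toAccount: String, content: String)          // direct send to a second account
}

fun shareInvitation(
    invitationLink: String,
    firstAccount: String,
    selectedSecondAccounts: List<String>,  // chosen from the user relationship chain
    feed: FeedPublisher,
    channel: MessageChannel
) {
    if (selectedSecondAccounts.isEmpty()) {
        feed.publish(firstAccount, invitationLink)                            // first implementation
    } else {
        selectedSecondAccounts.forEach { channel.send(it, invitationLink) }   // second implementation
    }
}
```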
Step 305: and the second terminal receives the singing receiving invitation information shared by the first user account.
The singing receiving invitation information is generated by a first terminal where the first user account is located based on the first user account, the target song identification and the singing audio of the first song segment, the first song segment is any song segment of the target song, and the singing receiving invitation information is used for being linked to a singing receiving interface of the target song.
Step 306: and the second terminal displays a singing receiving interface based on the singing receiving invitation information.
The singing receiving interface is used for guiding a second user to receive the first song segment, namely guiding the second user to sing the second song segment. The second song segment is a song segment that is included in the target song and is different from the first song segment. For example, the second song segment is a song segment subsequent to and contiguous with the first song segment, or any song segment selected from the target songs except the first song segment.
As an example, the singing receiving invitation information is in a form of a web page link, and the second user can enter the singing receiving interface by clicking the singing receiving invitation information. That is, when the second terminal detects the trigger operation of the sing-receiving invitation information, the sing-receiving interface can be displayed.
As another example, the singing receiving invitation information includes a graphic identifier, and the second terminal may obtain the link address of the singing receiving interface by scanning the graphic identifier, and then display the singing receiving interface based on that link address. For example, the singing receiving invitation information is in picture form and the picture includes the graphic identifier; the user can then long-press the picture to trigger the second terminal to recognize the graphic identifier and enter the singing receiving interface.
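As a counterpart to the earlier link-building sketch, the second terminal could decode the link address recovered from the graphic identifier as shown below; the parameter names mirror that sketch and are likewise assumptions rather than a real format.

```kotlin
import java.net.URI
import java.net.URLDecoder

// Illustrative sketch: decode the link address obtained from the graphic
// identifier and recover the fields needed to display the singing receiving interface.
fun parseInvitationLink(link: String): Map<String, String> {
    val query = URI(link).rawQuery ?: return emptyMap()
    return query.split("&")
        .mapNotNull { part ->
            val idx = part.indexOf('=')
            if (idx <= 0) null
            else part.substring(0, idx) to URLDecoder.decode(part.substring(idx + 1), "UTF-8")
        }
        .toMap()
}

fun main() {
    val fields = parseInvitationLink(
        "https://example.invalid/pickup?leader=user_1001&song=song_42&audio=https%3A%2F%2Fexample.invalid%2Fa.m4a"
    )
    println(fields["song"])  // "song_42": load the next segment's accompaniment and lyrics
}
```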
As an example, the singing receiving interface of the target song includes lyrics of the second song segment, displayed sequentially in play-time order, and a recording option. When the user wants to start singing receiving, the recording option may be triggered and singing of the second song segment may begin.
Step 307: and when receiving the singing receiving instruction based on the singing receiving interface, the second terminal plays the accompaniment audio of the second song segment and carries out singing receiving recording to obtain the singing audio of the second song segment.
The singing receiving instruction can be triggered by the second user through a specified operation; the specified operation may be a trigger operation on the recording option in the singing receiving interface, or may be a voice operation or a gesture operation, and the like, which is not limited in the embodiments of the present application. The accompaniment audio of the second song segment may be the accompaniment audio in accompaniment mode, or may be the audio in original-singing mode, which is not limited in the embodiments of the present application.
As an example, the singing receiving interface of the target song includes lyrics of the second song segment, displayed sequentially in play-time order, and a recording option. When a trigger operation on the recording option is detected, it is determined that the singing receiving instruction has been received.
When the singing receiving instruction is received based on the singing receiving interface of the target song, the second terminal can play the accompaniment audio of the second song segment and can also display the lyrics of the second song segment sequentially according to their playing time order, so that the second user can sing the second song segment following the accompaniment audio and the lyrics. In addition, after receiving the singing receiving instruction, the second terminal can start the audio recording function to perform the singing receiving recording, so as to record the audio sung by the second user.
Referring to fig. 10, fig. 10 is a schematic diagram of a song pickup interface of a target song according to an embodiment of the present application, as shown in fig. 10, the song pickup interface includes lyrics and a recording option, and when a second user wants to start to pick up a song, the second user can click the recording option to start singing, so that the second terminal records the singing of the second user. After the second user clicks the recording option and performs the vocal reception, the second terminal may display a vocal reception recording interface shown in fig. 11.
It should be noted that, while the second terminal is performing the singing receiving recording, or after the recording is finished, the singing receiving recording may be performed again when a re-sing instruction is received. The re-sing instruction can be triggered by the second user through a specified operation; for example, the specified operation may be a trigger operation on a re-sing option in the singing receiving interface.
In addition, after the second terminal performs the singing receiving recording and obtains the singing audio of the second song segment, the second terminal can also obtain a singing receiving reward. The singing receiving reward may be set by the first user account, or may be set by the server, which is not limited in the embodiments of the present application.
As an example, after the second terminal performs the singing receiving recording and obtains the singing audio of the second song segment, a reward acquisition option may be displayed in the singing receiving interface, and when a trigger operation on the reward acquisition option is detected, the singing receiving reward can be obtained. For example, taking an electronic red envelope as the singing receiving reward, the reward acquisition option can be a red envelope grabbing option.
Further, before obtaining the singing receiving reward, the second terminal may also determine whether the singing audio of the second song segment meets the reward condition, and obtain the singing receiving reward when the singing audio of the second song segment meets the reward condition.
The reward condition may be that the singing score of the singing audio of the second song segment is greater than or equal to a score threshold, or that the singing grade of the singing audio of the second song segment is greater than or equal to a preset level. The score threshold may be preset, for example 400 points or 500 points. The preset level may also be preset, for example an S level or an SS level.
The singing grade of the singing audio of the second song segment can be determined according to the singing score of the singing audio of the second song segment. The singing score of the singing audio of the second song segment can be obtained by the second terminal scoring the singing audio of the second song segment, or by the server scoring it, which is not limited in the embodiments of the application.
Further, if the singing audio of the second song segment meets the reward condition, a reward acquisition option is displayed in the singing receiving interface; when a trigger instruction for the reward acquisition option is received, a target singing receiving reward is obtained from at least one singing receiving reward, where the target singing receiving reward is any one of the at least one singing receiving reward, and the at least one singing receiving reward is set by the first user account or the server for the singing receiving operation of the target song.
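A minimal sketch of this reward condition and claim flow is given below, assuming a plain score threshold (400 points is one of the example values mentioned above) and a simple pool of remaining reward amounts; how rewards are actually stored and handed out is not specified in the application.

```kotlin
// Illustrative sketch: show the reward acquisition option only if the singing
// score meets the threshold, then hand out any one of the remaining rewards.
data class RewardPool(val amountsCents: MutableList<Long>)

fun meetsRewardCondition(score: Int, threshold: Int = 400): Boolean = score >= threshold

fun claimReward(pool: RewardPool): Long? =
    if (pool.amountsCents.isEmpty()) null else pool.amountsCents.removeAt(0)

fun main() {
    val pool = RewardPool(mutableListOf(123L, 402L, 87L))
    val score = 488
    if (meetsRewardCondition(score)) {
        println("Claimed: ${claimReward(pool)} cents, ${pool.amountsCents.size} rewards left")
    }
}
```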
For example, if the singing receiving reward is an electronic red envelope, the reward acquisition option can be a red envelope grabbing option, and the target singing receiving reward can be any one of the red envelopes grabbed. Referring to fig. 12, when the singing score reaches 488 points, the red envelope grabbing option may be displayed in the singing receiving interface, and when a trigger instruction for the red envelope grabbing option is received, a red envelope may be allocated to the second user account.
Further, after the target singing receiving reward is obtained from the at least one singing receiving reward, the second terminal can also display obtaining information of the at least one singing receiving reward, and the obtaining information comprises the value of each singing receiving reward in the at least one singing receiving reward and an acquirer. For example, if the singing receiving reward is an electronic red packet, the acquisition information of at least one singing receiving reward may include the amount of money of each red packet and the acquirer of each red packet. For example, acquisition information of the singing receiving bonus as shown in fig. 13 may be displayed.
In addition, after the first terminal shares the singing receiving invitation information with the second user account, when a singing receiving information query instruction is received, a singing receiving information display interface can be displayed, the singing receiving information display interface displays the singing receiving information, and the singing receiving information is used for describing the singing receiving condition of the target song. For example, the singing receiving information may include how many users have accepted the invitation to receive singing, user identifications of the users who accepted the invitation, and singing receiving audio information of the users who received the invitation. For example, if the singing receiving reward is an electronic red envelope, the singing receiving information may include how many users have picked up the singing receiving red envelope, a user identifier for picking up the red envelope, and singing receiving audio information of the users for picking up the red envelope.
In addition, after the second user account receives the singing, the first terminal can also play the singing receiving audio of the target song based on the leading interface or the singing receiving interface, wherein the singing receiving audio is obtained by splicing the singing audio of the first song segment and the singing audio of the second song segment.
For example, after the second user account completes the singing receiving, the server may splice the singing audio of the first song segment and the singing audio of the second song segment to obtain the singing receiving audio of the target song, and publish the singing receiving audio of the target song to the leading interface or the singing receiving interface, so that the user may play the singing receiving audio of the target song from the leading interface or the singing receiving interface.
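Reduced to its simplest form, and assuming both segments are PCM buffers in the same format and sample rate, the splicing described above amounts to concatenation, as in the sketch below. This is an illustration only, not the server's actual processing.

```kotlin
// Illustrative sketch: concatenate the singing audio of the first song segment
// with the singing audio of the second song segment (same format assumed).
fun spliceSegments(firstSegment: FloatArray, secondSegment: FloatArray): FloatArray {
    val result = FloatArray(firstSegment.size + secondSegment.size)
    firstSegment.copyInto(result, destinationOffset = 0)
    secondSegment.copyInto(result, destinationOffset = firstSegment.size)
    return result
}
```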
In addition, after the second user account finishes singing receiving, the second user can also invite other users to continue singing receiving of the target song, so that relay singing of the target song by more users is realized. For example, if a singing receiving invitation instruction is received based on the singing receiving interface, the second terminal may also generate singing receiving invitation information based on the singing audio of the first song segment, the second user account, and the singing audio of the second song segment, where the singing receiving invitation information is used to link to a singing receiving interface of the target song, and that singing receiving interface is used to guide a user to perform singing receiving on the second song segment.
In addition, if the first user shares the singing receiving invitation information of the target song with a plurality of users, each user who receives the share can, after obtaining the singing receiving invitation information, separately perform singing receiving on the first song segment to generate singing receiving audio of the target song, and can also share their own singing receiving information so that other users can continue singing based on it, thereby realizing singing receiving together with other users.
As an example, if the first user shares the singing receiving invitation information of the target song with multiple users, each of the multiple users may select a designated song segment from the multiple song segments of the target song to sing, where the designated song segment is a song segment that has not been sung by any other user. That is, each user can only sing song segments that other users have not yet performed; if another user has already selected a song segment, the user cannot select that song segment again and needs to select a song segment that has not yet been selected by other users. After the multiple users complete singing, the server may splice the singing audio of the first user and the multiple users to obtain the singing audio of the target song.
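The rule that each song segment can be claimed by only one user amounts to simple server-side bookkeeping. The following Python sketch is a minimal illustration under assumed names (SegmentRegistry, claim, submit_audio); storage, conflict handling across servers, and the final audio splicing are intentionally omitted.

```python
from threading import Lock
from typing import Dict, List


class SegmentRegistry:
    """Server-side bookkeeping so each song segment is sung by at most one user (sketch)."""

    def __init__(self, song_id: str, segment_count: int):
        self.song_id = song_id
        self.segment_count = segment_count
        self.claimed: Dict[int, str] = {}   # segment index -> user id
        self.audio: Dict[int, str] = {}     # segment index -> audio path/URL
        self._lock = Lock()

    def claim(self, segment_index: int, user_id: str) -> bool:
        """Return True if the segment was still free and is now assigned to user_id."""
        with self._lock:
            if not 0 <= segment_index < self.segment_count:
                return False
            if segment_index in self.claimed:
                return False                 # already selected by another user
            self.claimed[segment_index] = user_id
            return True

    def submit_audio(self, segment_index: int, user_id: str, audio_url: str) -> None:
        """Record the singing audio for a segment previously claimed by this user."""
        if self.claimed.get(segment_index) != user_id:
            raise ValueError("segment not claimed by this user")
        self.audio[segment_index] = audio_url

    def splice_order(self) -> List[str]:
        """Audio URLs in segment order, ready to be spliced into the full song."""
        return [self.audio[i] for i in sorted(self.audio)]
```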
In the embodiment of the application, the user can lead the singing of the target song, that is, sing one song segment of the target song, and then send a singing receiving invitation to other users to invite them to receive the singing of the target song, that is, to sing other song segments of the target song. In this way, a plurality of users can sing the same song together, the interactivity of the song singing process is increased, and the interactive singing requirements of users are met.
Fig. 14 is a block diagram of a song singing apparatus provided in an embodiment of the present application. The apparatus may be integrated in a first terminal, and as shown in fig. 14, the apparatus includes:
the recording module 1401 is configured to, when a leading instruction is received based on a leading interface of a target song, play accompaniment audio of a first song segment and perform leading recording to obtain singing audio of the first song segment, where the first song segment is any one of the song segments of the target song;
a generating module 1402, configured to generate, after the recording is finished, singing receiving invitation information based on a currently logged-in first user account, a target song identifier, and the singing audio of the first song segment, where the singing receiving invitation information is used for linking to a singing receiving interface of the target song, and the singing receiving interface is used to guide a user to perform singing receiving on the first song segment;
a sharing module 1403, configured to share the singing receiving invitation information with a second user account, so as to instruct a second user to trigger display of the singing receiving interface based on the singing receiving invitation information and perform singing receiving on the first song segment based on the singing receiving interface.
Optionally, the target song includes a plurality of song segments, and the first song segment is the first one of the plurality of song segments, or the first song segment is any song segment selected from the plurality of song segments.
Optionally, the generating module 1402 is configured to:
displaying a reward setting interface, wherein the reward setting interface is used for setting the singing receiving reward to be issued;
when a setting completion instruction is received based on the reward setting interface, acquiring information of the singing receiving reward that has been set and is to be issued;
and generating the singing receiving invitation information based on the information of the singing receiving reward, the first user account and the singing audio of the first song segment.
Optionally, the reward setting interface includes a value setting entry of the singing receiving reward and a number setting entry of the singing receiving reward, and the information of the singing receiving reward includes the value of the singing receiving reward and the number of singing receiving rewards.
Optionally, the singing receiving invitation information includes a graphic identifier, and the graphic identifier carries a link address of the singing receiving interface.
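One common way to realize a graphic identifier that carries a link address is a two-dimensional code. The sketch below uses the third-party qrcode package purely as an illustration; the deep-link URL format is invented for this example and is not defined by the application.

```python
import qrcode  # third-party package, used only to illustrate a graphic identifier


def build_invitation_graphic(song_id: str, first_user_account: str,
                             audio_id: str, output_path: str) -> str:
    """Encode the link address of the singing receiving interface into an image (sketch)."""
    # Hypothetical deep link; the real address format is decided by the application.
    link = (f"https://example.com/receive-singing"
            f"?song={song_id}&inviter={first_user_account}&audio={audio_id}")
    img = qrcode.make(link)          # returns a PIL image of the two-dimensional code
    img.save(output_path)
    return link
```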
Optionally, the sharing module 1403 is configured to:
publishing the singing receiving invitation information to an information aggregation page, so that a second terminal where a second user account is located displays the information aggregation page including the singing receiving invitation information; or,
and sending the singing receiving invitation information to a second terminal where at least one second user account in the user relationship chain is located based on the user relationship chain of the first user account.
Optionally, the apparatus further comprises:
and the playing module is used for playing the singing receiving audio of the target song based on the leading interface or the singing receiving interface, where the singing receiving audio is obtained by splicing the singing audio of the first song segment and the singing audio of the second song segment, the singing audio of the second song segment is obtained by the second user performing singing receiving on the first song segment based on the singing receiving interface, and the second song segment is a song segment which is included in the target song and is different from the first song segment.
In the embodiment of the application, the user can lead the singing of the target song, that is, sing one song segment of the target song, and then send a singing receiving invitation to other users to invite them to receive the singing of the target song, that is, to sing other song segments of the target song. In this way, a plurality of users can sing the same song together, the interactivity of the song singing process is increased, and the interactive singing requirements of users are met.
Fig. 15 is a block diagram of another song singing apparatus provided in an embodiment of the present application. The apparatus may be integrated in a second terminal, and as shown in fig. 15, the apparatus includes:
the receiving module 1501 is configured to receive singing receiving invitation information shared by a first user account, where the singing receiving invitation information is generated by a first terminal where the first user account is located based on the first user account, a target song identifier, and a singing audio of a first song clip, the first song clip is any one of song clips of a target song, and the singing receiving invitation information is used for linking to a singing receiving interface of the target song;
a display module 1502, configured to display the singing receiving interface based on the singing receiving invitation information;
and the recording module 1503 is configured to, when a singing receiving instruction is received based on the singing receiving interface, play accompaniment audio of a second song segment and perform singing receiving recording to obtain singing audio of the second song segment, where the second song segment is a song segment which is included in the target song and is different from the first song segment.
Optionally, the apparatus further comprises:
and the obtaining module is used for obtaining a singing receiving reward, and the singing receiving reward is set by the first user account or the server.
Optionally, the obtaining module includes:
a determining unit, configured to determine whether the singing audio of the second song segment meets a reward condition;
and the obtaining unit is used for obtaining the singing receiving reward if the singing audio of the second song fragment meets the reward condition.
Optionally, the obtaining unit is configured to:
if the singing audio of the second song segment meets the reward condition, displaying a reward acquisition option in the singing receiving interface;
and when a trigger instruction for the reward acquisition option is received, acquiring a target singing receiving reward from at least one singing receiving reward, wherein the target singing receiving reward is any one of the at least one singing receiving reward, and the at least one singing receiving reward is set by the first user account or the server for the singing receiving operation of the target song.
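The reward flow described above can be summarized as a condition check followed by taking one reward from the pool. The Python sketch below is only an illustration; the scoring threshold and the random pick are assumptions, since the application does not limit what the reward condition is or how the target singing receiving reward is chosen.

```python
import random
from typing import List, Optional


def meets_reward_condition(singing_score: float, threshold: float = 60.0) -> bool:
    """Illustrative reward condition: the singing audio of the second song segment
    is scored and must reach a threshold; the actual condition is not limited here."""
    return singing_score >= threshold


def claim_target_reward(reward_pool: List[float], singing_score: float) -> Optional[float]:
    """Take one singing receiving reward (e.g. a red packet amount) from the pool
    set by the first user account or the server, if the condition is met."""
    if not reward_pool or not meets_reward_condition(singing_score):
        return None
    index = random.randrange(len(reward_pool))
    return reward_pool.pop(index)     # the target reward is any one of the remaining rewards


# e.g. pool = [1.0, 2.5, 0.8]; claim_target_reward(pool, singing_score=85.0)
```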
Optionally, the apparatus further comprises:
and the display module is used for displaying the acquisition information of the at least one singing receiving reward, and the acquisition information comprises the value of each singing receiving reward in the at least one singing receiving reward and an acquirer.
In the embodiment of the application, the user can lead the singing of the target song, that is, sing one song segment of the target song, and then send a singing receiving invitation to other users to invite them to receive the singing of the target song, that is, to sing other song segments of the target song. In this way, a plurality of users can sing the same song together, the interactivity of the song singing process is increased, and the interactive singing requirements of users are met.
It should be noted that, when the song singing apparatus provided in the above embodiments sings a song, the division of the above functional modules is merely taken as an example for illustration. In practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the song singing apparatus provided in the above embodiments and the song singing method embodiments belong to the same concept; for the specific implementation process, reference may be made to the method embodiments, which is not described herein again.
Fig. 16 is a block diagram of a terminal 1600 according to an embodiment of the present application. The terminal 1600 may be the first terminal or the second terminal in the above embodiments. The terminal 1600 may be a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 1600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, terminal 1600 includes: a processor 1601, and a memory 1602.
Processor 1601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 1601 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Processor 1601 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1601 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1602 may include one or more computer-readable storage media, which may be non-transitory. The memory 1602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1602 is used to store at least one instruction for execution by processor 1601 to implement a song singing method provided by method embodiments of the present application.
In some embodiments, the terminal 1600 may also optionally include: peripheral interface 1603 and at least one peripheral. Processor 1601, memory 1602 and peripheral interface 1603 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1603 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1604, a touch screen display 1605, a camera 1606, audio circuitry 1607, a positioning component 1608, and a power supply 1609.
Peripheral interface 1603 can be used to connect at least one I/O (Input/Output) related peripheral to processor 1601 and memory 1602. In some embodiments, processor 1601, memory 1602, and peripheral interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1601, the memory 1602 and the peripheral device interface 1603 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 1604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1604 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 1604 converts the electrical signal into an electromagnetic signal to be transmitted, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1604 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display 1605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1605 is a touch display screen, the display screen 1605 also has the ability to capture touch signals on or over the surface of the display screen 1605. The touch signal may be input to the processor 1601 as a control signal for processing. At this point, the display 1605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1605 may be one, providing the front panel of the terminal 1600; in other embodiments, the display screens 1605 can be at least two, respectively disposed on different surfaces of the terminal 1600 or in a folded design; in still other embodiments, display 1605 can be a flexible display disposed on a curved surface or a folded surface of terminal 1600. Even further, the display 1605 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 1605 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 1606 is used to capture images or video. Optionally, camera assembly 1606 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1606 can also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1607 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1601 for processing or inputting the electric signals to the radio frequency circuit 1604 to achieve voice communication. For stereo sound acquisition or noise reduction purposes, the microphones may be multiple and disposed at different locations of terminal 1600. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1601 or the radio frequency circuit 1604 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuit 1607 may also include a headphone jack.
The positioning component 1608 is configured to locate the current geographic location of the terminal 1600 for navigation or LBS (Location Based Service). The positioning component 1608 may be a positioning component based on the United States GPS (Global Positioning System), the Chinese BeiDou System, the Russian GLONASS System, or the European Union's Galileo System.
Power supply 1609 is used to provide power to the various components of terminal 1600. Power supply 1609 may be alternating current, direct current, disposable or rechargeable. When power supply 1609 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1600 also includes one or more sensors 1610. The one or more sensors 1610 include, but are not limited to: acceleration sensor 1611, gyro sensor 1612, pressure sensor 1613, fingerprint sensor 1614, optical sensor 1615, and proximity sensor 1616.
Acceleration sensor 1611 may detect acceleration in three coordinate axes of a coordinate system established with terminal 1600. For example, the acceleration sensor 1611 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1601 may control the touch display screen 1605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1611. The acceleration sensor 1611 may also be used for acquisition of motion data of a game or a user.
The gyroscope sensor 1612 can detect the body direction and rotation angle of the terminal 1600, and the gyroscope sensor 1612 can cooperate with the acceleration sensor 1611 to collect 3D actions of the user on the terminal 1600. Based on the data collected by the gyroscope sensor 1612, the processor 1601 may implement the following functions: motion sensing (such as changing the UI according to a tilting operation of the user), image stabilization during shooting, game control, and inertial navigation.
Pressure sensors 1613 may be disposed on a side bezel of terminal 1600 and/or underlying touch display 1605. When the pressure sensor 1613 is disposed on the side frame of the terminal 1600, a user's holding signal of the terminal 1600 can be detected, and the processor 1601 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1613. When the pressure sensor 1613 is disposed at the lower layer of the touch display 1605, the processor 1601 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 1605. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1614 is configured to collect a fingerprint of the user, and the processor 1601 is configured to identify the user based on the fingerprint collected by the fingerprint sensor 1614, or the fingerprint sensor 1614 is configured to identify the user based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1601 authorizes the user to perform relevant sensitive operations including unlocking a screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 1614 may be disposed on the front, back, or side of the terminal 1600. When a physical key or vendor Logo is provided on the terminal 1600, the fingerprint sensor 1614 may be integrated with the physical key or vendor Logo.
The optical sensor 1615 is used to collect ambient light intensity. In one embodiment, the processor 1601 may control the display brightness of the touch display screen 1605 based on the ambient light intensity collected by the optical sensor 1615. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1605 is increased; when the ambient light intensity is low, the display brightness of the touch display 1605 is turned down. In another embodiment, the processor 1601 may also dynamically adjust the shooting parameters of the camera assembly 1606 based on the ambient light intensity collected by the optical sensor 1615.
The proximity sensor 1616, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 1600. The proximity sensor 1616 is used to collect the distance between the user and the front surface of the terminal 1600. In one embodiment, when the proximity sensor 1616 detects that the distance between the user and the front surface of the terminal 1600 gradually decreases, the processor 1601 controls the touch display 1605 to switch from the bright-screen state to the screen-off state; when the proximity sensor 1616 detects that the distance between the user and the front surface of the terminal 1600 gradually increases, the processor 1601 controls the touch display 1605 to switch from the screen-off state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 16 is not intended to be limiting of terminal 1600, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
In an exemplary embodiment, a computer-readable storage medium is also provided, which has instructions stored thereon, which when executed by a processor implement the song singing method described above.
In an exemplary embodiment, a computer program product is also provided for implementing the above-described song singing method when the computer program product is executed.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (16)

1. A song singing method is applied to a first terminal and comprises the following steps:
when a leading instruction is received based on a leading interface of a target song, playing accompaniment audio of a first song segment, and performing leading recording to obtain singing audio of the first song segment, wherein the first song segment is any one of the song segments of the target song;
after the recording is finished, if a singing receiving invitation instruction is received based on the singing receiving interface, generating singing receiving invitation information based on a currently logged-in first user account, a target song identifier and the singing audio of the first song segment, wherein the singing receiving invitation information is used for linking to the singing receiving interface of the target song, and the singing receiving interface is used for guiding a user to perform singing receiving on the first song segment;
and sharing the singing receiving invitation information with a second user account, so as to instruct a second user to trigger display of the singing receiving interface based on the singing receiving invitation information and perform singing receiving on the first song segment based on the singing receiving interface.
2. The method of claim 1, wherein the target song comprises a plurality of song segments, and the first song segment is the first one of the plurality of song segments, or the first song segment is any song segment selected from the plurality of song segments.
3. The method of claim 1, wherein before the singing receiving invitation instruction is received based on the singing receiving interface, the method further comprises:
displaying a reward setting interface, wherein the reward setting interface is used for setting the singing receiving reward to be issued;
when a setting completion instruction is received based on the reward setting interface, acquiring information of the singing receiving reward that has been set and is to be issued;
the generating of the singing receiving invitation information based on the currently logged-in first user account, the target song identification and the singing audio of the first song segment includes:
and generating the singing receiving invitation information based on the information of the singing receiving reward, the first user account, the target song identification and the singing audio of the first song segment.
4. The method according to claim 3, wherein the reward setting interface comprises a value setting entry of a singing receiving reward and a number setting entry of the singing receiving reward, and the information of the singing receiving reward comprises the value of the singing receiving reward and the number of the singing receiving reward.
5. The method of claim 1, wherein the singing receiving invitation information includes a graphic identifier, and the graphic identifier carries a link address of the singing receiving interface.
6. The method according to any one of claims 1 to 5, wherein the sharing the invitation to receive singing information to a second user account includes:
publishing the singing receiving invitation information to an information aggregation page, so that a second terminal where a second user account is located displays the information aggregation page including the singing receiving invitation information; or,
and sending the singing receiving invitation information to a second terminal where at least one second user account in the user relationship chain is located based on the user relationship chain of the first user account.
7. The method according to any one of claims 1 to 5, wherein after sharing the invitation to receive singing information to the second user account, the method further comprises:
playing the singing receiving audio of the target song based on the leading interface or the singing receiving interface, wherein the singing receiving audio is obtained by splicing the singing audio of the first song segment and the singing audio of a second song segment, the singing audio of the second song segment is obtained by the second user performing singing receiving on the first song segment based on the singing receiving interface, and the second song segment is a song segment which is included in the target song and is different from the first song segment.
8. A song singing method is applied to a second terminal and comprises the following steps:
receiving singing receiving invitation information shared by a first user account, wherein the singing receiving invitation information is generated by a first terminal where the first user account is located based on the first user account, a target song identifier and singing audio of a first song segment, the first song segment is any one of the song segments of a target song, and the singing receiving invitation information is used for linking to a singing receiving interface of the target song;
displaying the singing receiving interface based on the singing receiving invitation information;
and when a singing receiving instruction is received based on the singing receiving interface, playing the accompaniment audio of a second song segment, and carrying out singing receiving recording to obtain the singing audio of the second song segment, wherein the second song segment is the song segment which is included in the target song and is different from the first song segment.
9. The method of claim 8, wherein after the singing receiving recording is performed to obtain the singing audio of the second song segment, the method further comprises:
and acquiring a singing receiving reward, wherein the singing receiving reward is set by the first user account or a server.
10. The method of claim 9, wherein the acquiring a singing receiving reward comprises:
determining whether the singing audio of the second song segment meets a reward condition;
and if the singing audio of the second song segment meets the reward condition, acquiring the singing receiving reward.
11. The method of claim 10, wherein obtaining a singing receiving reward if the singing audio of the second song segment meets a reward condition comprises:
if the singing audio of the second song segment meets the reward condition, displaying a reward acquisition option in the singing receiving interface;
and when a trigger instruction for the reward acquisition option is received, acquiring a target singing receiving reward from at least one singing receiving reward, wherein the target singing receiving reward is any one of the at least one singing receiving reward, and the at least one singing receiving reward is set by the first user account or the server for the singing receiving operation of the target song.
12. The method of claim 11, wherein after the target singing receiving reward is acquired from the at least one singing receiving reward, the method further comprises:
and displaying the acquisition information of the at least one singing receiving reward, wherein the acquisition information comprises the value of each singing receiving reward in the at least one singing receiving reward and an acquirer.
13. A song singing apparatus, applied to a first terminal, wherein the apparatus comprises:
the recording module is used for, when a leading instruction is received based on a leading interface of a target song, playing accompaniment audio of a first song segment and carrying out leading recording to obtain singing audio of the first song segment, wherein the first song segment is any one of the song segments of the target song;
the generating module is used for generating singing receiving invitation information based on a currently logged-in first user account, a target song identifier and the singing audio of the first song segment after the recording is finished, wherein the singing receiving invitation information is used for linking to a singing receiving interface of the target song, and the singing receiving interface is used for guiding a user to perform singing receiving on the first song segment;
and the sharing module is used for sharing the singing receiving invitation information with a second user account, so as to instruct a second user to trigger display of the singing receiving interface based on the singing receiving invitation information and perform singing receiving on the first song segment based on the singing receiving interface.
14. A song singing apparatus, applied to a second terminal, the apparatus comprising:
the receiving module is used for receiving singing receiving invitation information shared by a first user account, wherein the singing receiving invitation information is generated by a first terminal where the first user account is located based on the first user account, a target song identifier and singing audio of a first song segment, the first song segment is any one of the song segments of a target song, and the singing receiving invitation information is used for linking to a singing receiving interface of the target song;
the display module is used for displaying the singing receiving interface based on the singing receiving invitation information;
and the recording module is used for playing the accompaniment audio of a second song segment when a singing receiving instruction is received based on the singing receiving interface, and performing singing receiving recording to obtain the singing audio of the second song segment, wherein the second song segment is a song segment which is included in the target song and is different from the first song segment.
15. A terminal, characterized in that the terminal comprises:
one or more processors;
one or more memories for storing the one or more processor-executable instructions;
wherein the one or more processors are configured to perform the method of singing a song of any one of claims 1-7 or claims 8-12.
16. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of a terminal, enable the terminal to perform the method of singing a song of any one of claims 1-7 or claims 8-12.
CN201911397437.2A 2019-12-30 2019-12-30 Song singing method, device, terminal and storage medium Active CN111131867B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911397437.2A CN111131867B (en) 2019-12-30 2019-12-30 Song singing method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111131867A true CN111131867A (en) 2020-05-08
CN111131867B CN111131867B (en) 2022-03-15

Family

ID=70505329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911397437.2A Active CN111131867B (en) 2019-12-30 2019-12-30 Song singing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111131867B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130031220A1 (en) * 2011-03-17 2013-01-31 Coverband, Llc System and Method for Recording and Sharing Music
CN105006234A (en) * 2015-05-27 2015-10-28 腾讯科技(深圳)有限公司 Karaoke processing method and apparatus
CN105118500A (en) * 2015-06-05 2015-12-02 福建凯米网络科技有限公司 Singing evaluation method, system and terminal
US10178365B1 (en) * 2017-08-25 2019-01-08 Vid Inc. System and method for combining audio tracks with video files
CN109525568A (en) * 2018-11-02 2019-03-26 广州酷狗计算机科技有限公司 Requesting songs method and device
CN110209871A (en) * 2019-06-17 2019-09-06 广州酷狗计算机科技有限公司 Song comments on dissemination method and device
CN110349559A (en) * 2019-07-12 2019-10-18 广州酷狗计算机科技有限公司 Carry out audio synthetic method, device, system, equipment and storage medium
CN110491358A (en) * 2019-08-15 2019-11-22 广州酷狗计算机科技有限公司 Carry out method, apparatus, equipment, system and the storage medium of audio recording

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
序列号查询: "WeChat singing red packet, here it comes" (微信唱歌红包,来了), Sohu.com *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111404808A (en) * 2020-06-02 2020-07-10 腾讯科技(深圳)有限公司 Song processing method
WO2021244257A1 (en) * 2020-06-02 2021-12-09 腾讯科技(深圳)有限公司 Song processing method and apparatus, electronic device, and readable storage medium
CN112989109A (en) * 2021-04-14 2021-06-18 腾讯音乐娱乐科技(深圳)有限公司 Music structure analysis method, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111131867B (en) 2022-03-15

Similar Documents

Publication Publication Date Title
CN112672176B (en) Interaction method, device, terminal, server and medium based on virtual resources
CN109327608B (en) Song sharing method, terminal, server and system
CN108965757B (en) Video recording method, device, terminal and storage medium
CN113041625B (en) Live interface display method, device and equipment and readable storage medium
CN113411680B (en) Multimedia resource playing method, device, terminal and storage medium
CN110139143B (en) Virtual article display method, device, computer equipment and storage medium
CN111061405B (en) Method, device and equipment for recording song audio and storage medium
CN109771955B (en) Invitation request processing method, device, terminal and storage medium
CN113157172A (en) Barrage information display method, transmission method, device, terminal and storage medium
CN111402844B (en) Song chorus method, device and system
CN112788359A (en) Live broadcast processing method and device, electronic equipment and storage medium
CN111031391A (en) Video dubbing method, device, server, terminal and storage medium
CN110798327B (en) Message processing method, device and storage medium
CN111131867B (en) Song singing method, device, terminal and storage medium
CN111628925A (en) Song interaction method and device, terminal and storage medium
CN111711838A (en) Video switching method, device, terminal, server and storage medium
CN113204672B (en) Resource display method, device, computer equipment and medium
CN108055349B (en) Method, device and system for recommending K song audio
CN110337042B (en) Song on-demand method, on-demand order processing method, device, terminal and medium
CN112040267A (en) Chorus video generation method, chorus method, apparatus, device and storage medium
CN110808985B (en) Song on-demand method, device, terminal, server and storage medium
CN110519614B (en) Method, device and equipment for interaction between accounts in live broadcast room
CN111246233B (en) Video live broadcast method, device and system
CN110267114B (en) Video file playing method, device, terminal and storage medium
CN111314205A (en) Instant messaging matching method, device, system, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant