CN111599328A - Song synthesis method, device, equipment and storage medium

Info

Publication number: CN111599328A
Application number: CN202010442261.4A
Authority: CN (China)
Prior art keywords: song, recorded, recorded song, segment, songs
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111599328B (en)
Inventor: 苏裕贤
Current and original assignee: Guangzhou Kugou Computer Technology Co Ltd
Events: application filed by Guangzhou Kugou Computer Technology Co Ltd; publication of CN111599328A; application granted; publication of CN111599328B
(The legal status and assignee listed are assumptions derived by Google, not legal conclusions; Google has not performed a legal analysis.)


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00 Details of electrophonic musical instruments
    • G10H 1/36 Accompaniment arrangements
    • G10H 1/361 Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • G10H 1/366 Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems, with means for modifying or correcting the external signal, e.g. pitch correction, reverberation, changing a singer's voice
    • G10H 2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H 2210/005 Musical accompaniment, i.e. complete instrumental rhythm synthesis added to a performed melody, e.g. as output by drum machines
    • G10H 2210/101 Music composition or musical creation; tools or processes therefor
    • G10H 2210/125 Medley, i.e. linking parts of different musical pieces in one single piece, e.g. sound collage, DJ mix

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)

Abstract

The application discloses a song synthesis method, device, equipment, and storage medium, belonging to the technical field of audio and video processing. The method comprises the following steps: acquiring at least two recorded songs, the at least two recorded songs comprising a first recorded song of a first user account and a second recorded song of a second user account, where the first recorded song and the second recorded song comprise the same song accompaniment and the first user account is different from the second user account; acquiring a first recorded song segment from the first recorded song and a second recorded song segment from the second recorded song; and generating a chorus song from the first recorded song segment and the second recorded song segment. The present application thus provides a new way of synthesizing chorus songs.

Description

Song synthesis method, device, equipment and storage medium
Technical Field
The present application relates to the field of audio and video processing technologies, and in particular, to a song synthesis method, device, equipment, and storage medium.
Background
A singing client that provides a singing function is one of the most popular types of entertainment application at present. After logging in to a singing client, a user can select a song accompaniment to sing alone, or can perform a chorus with other users.
Currently, the chorus function is provided by the singing client. Generally, the singing client divides a song accompaniment into singing roles according to its lyrics, with different singing roles corresponding to different song segments of the accompaniment. According to the different singing roles selected by different users, the singing client records the song segments sung by each user under a given singing role and generates a recording file. After the song segments of all the singing roles of the accompaniment have been recorded, the singing client combines the recording files corresponding to the singing roles into one song, so that different users sing a song together.
To let different users chorus a song in this way, the singing client has to divide the song accompaniment into singing roles, and it can synthesize the chorus song only after the song segments of all the singing roles have been recorded. The manner of synthesizing chorus songs is therefore quite limited.
Disclosure of Invention
The application provides a song synthesis method, device, equipment, and storage medium, offering a new way of synthesizing chorus songs. The technical solution is as follows:
according to an aspect of the present application, there is provided a song synthesizing method, the method including:
acquiring at least two recorded songs, wherein the at least two recorded songs comprise a first recorded song of a first user account and a second recorded song of a second user account, the first recorded song and the second recorded song comprise the same song accompaniment, and the first user account is different from the second user account;
acquiring a first recorded song segment according to the first recorded song, wherein the first recorded song segment is a song segment in the first recorded song; acquiring a second recorded song segment according to the second recorded song, wherein the second recorded song segment is a song segment in the second recorded song;
and generating a chorus song according to the first recorded song segment and the second recorded song segment.
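To make the three steps above concrete, the following is a minimal Python sketch. It is an illustration only, not code from the patent: RecordedSong, get_segment, synthesize_chorus, and the use of raw sample lists and index spans are all assumed names and simplifications.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RecordedSong:
    account_id: str          # user account that uploaded the recording
    accompaniment_id: str    # identifies the song accompaniment used
    samples: List[float]     # recorded audio samples (stand-in for real audio data)

def get_segment(song: RecordedSong, start: int, end: int) -> List[float]:
    """Take a song segment (a slice of samples) out of a recorded song."""
    return song.samples[start:end]

def synthesize_chorus(first: RecordedSong, second: RecordedSong,
                      first_span: Tuple[int, int],
                      second_span: Tuple[int, int]) -> List[float]:
    # Step 1: the two recordings must share the same song accompaniment
    # and come from different user accounts.
    assert first.accompaniment_id == second.accompaniment_id
    assert first.account_id != second.account_id
    # Step 2: take a segment from each recorded song.
    first_segment = get_segment(first, *first_span)
    second_segment = get_segment(second, *second_span)
    # Step 3: combine the segments into one chorus song. Segments whose
    # lyrics overlap would be superimposed instead of concatenated; see
    # the overlap handling described later.
    return first_segment + second_segment
```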
According to another aspect of the present application, there is provided a song synthesizing apparatus, the apparatus including:
a first obtaining module, configured to obtain at least two recorded songs, where the at least two recorded songs comprise a first recorded song of a first user account and a second recorded song of a second user account, the first recorded song and the second recorded song comprise the same song accompaniment, and the first user account is different from the second user account;
a second obtaining module, configured to obtain a first recorded song segment according to the first recorded song, where the first recorded song segment is a song segment in the first recorded song; the second acquisition module is further used for acquiring a second recorded song segment according to the second recorded song, wherein the second recorded song segment is a song segment in the second recorded song;
and the generating module is used for generating a chorus song according to the first recorded song segment and the second recorded song segment.
Optionally, the first obtaining module is configured to:
acquiring the first recorded song;
displaying at least one recommended recording song recommended according to the first recording song, wherein the recommended recording song comprises a recording song of the second user account, and the recommended recording song and the first recording song comprise the same song accompaniment;
and acquiring the second recorded song from the at least one recommended recorded song in response to a selection instruction triggered by the first selection operation.
Optionally, the first obtaining module is configured to:
displaying the recorded song of the second user account;
responding to a selection instruction triggered by a second selection operation, and acquiring a second recorded song from the recorded songs of the second user account;
and acquiring the first recorded song from the recorded songs of the first user account according to the second recorded song.
Optionally, the first obtaining module is configured to:
displaying candidate recorded songs of the first user account according to the second recorded songs, wherein the candidate recorded songs and the second recorded songs comprise the same song accompaniment;
and responding to a selection instruction triggered by a third selection operation, and acquiring the first recorded song from the candidate recorded songs.
Optionally, the first obtaining module is configured to:
displaying a recorded song plaza interface, where the recorded song plaza interface comprises at least two publicly recorded songs, and the publicly recorded songs comprise the same song accompaniment;
and responding to a selection instruction triggered by a fourth selection operation, and acquiring the first recorded song and the second recorded song from the at least two public recorded songs.
Optionally, the second obtaining module is configured to:
responding to a selection instruction triggered by a fifth selection operation, and acquiring the first recorded song segment from the first recorded song;
the second obtaining module is further configured to:
and responding to a selection instruction triggered by a sixth selection operation, and acquiring the second recorded song segment from the second recorded song.
Optionally, the generating module is configured to:
when recorded song segments with the same lyrics exist in the first recorded song segment and the second recorded song segment, superimposing the voices in the recorded song segments with the same lyrics to obtain a superimposed segment;
and generating a chorus song according to the superimposed segment and the recorded song segments with different lyrics in the first recorded song segment and the second recorded song segment.
According to yet another aspect of the present application, there is provided a computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the song synthesis method of the above aspect.
According to yet another aspect of the present application, there is provided a computer storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the song synthesis method of the above aspect.
The beneficial effects brought by the technical solution provided in the present application include at least the following:
the method comprises the steps of obtaining a first recorded song segment according to a first recorded song of at least two recorded songs, obtaining a second recorded song segment according to a second recorded song of the at least two recorded songs, and then generating a chorus song according to the first recorded song segment and the second recorded song segment. The user can freely select the recorded songs of different user accounts, and can flexibly select the recorded song segments of the recorded songs to be synthesized aiming at each selected recorded song to synthesize the chorus songs. An efficient and flexible synthesis of chorus songs is achieved. The present application provides a new way of synthesizing chorus songs.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of implementing multi-user chorus according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a user interface for a "song plaza" provided by an embodiment of the present application;
fig. 3 is a schematic structural diagram of a song synthesizing system provided in an embodiment of the present application;
fig. 4 is a schematic flowchart of a song synthesizing method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another song synthesizing method provided in the embodiment of the present application;
fig. 6 is a schematic flowchart of a method for acquiring a recorded song according to an embodiment of the present application;
fig. 7 is a schematic diagram of a user interface for displaying details of a first recorded song provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a user interface for displaying recommended recorded songs provided by an embodiment of the application;
fig. 9 is a schematic flowchart of another method for acquiring a recorded song according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a user interface for displaying recorded songs from a second user account as provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of a user interface for selecting a chorus mode according to an embodiment of the present application;
fig. 12 is a flowchart illustrating a method for obtaining a first recorded song from songs recorded in a first user account according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a user interface for selecting a first recorded song from among candidate recorded songs provided by an embodiment of the present application;
fig. 14 is a schematic flowchart of another method for acquiring a recorded song according to an embodiment of the present application;
FIG. 15 is a schematic diagram of a user interface for displaying a recorded song for completing a multi-user chorus according to an embodiment of the present application;
FIG. 16 is a schematic diagram of a user interface for a singing client to randomly obtain song segments according to an embodiment of the present application;
FIG. 17 is a schematic diagram of a user interface for selecting a song clip provided by an embodiment of the application;
fig. 18 is a schematic structural diagram of a song synthesizing apparatus according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of a terminal according to an embodiment of the present application.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of implementing multi-user chorus according to an embodiment of the present disclosure. As shown in fig. 1:
in step S1, the singing client acquires at least two recorded songs. Optionally, the singing client obtains a first recorded song of a first user and recommends a recorded song of a second user to the first user that includes the same song accompaniment based on the first recorded song. Wherein the first user is different from the second user.
When the singing client receives a selection instruction for selecting a second recorded song from the recommended recorded songs, it acquires the second recorded song.
Or, when the first user views the recorded songs of the second user, the singing client acquires a second recorded song of the second user according to a selection instruction for the second recorded song, and then acquires a first recorded song of the first user that has the same song accompaniment as the second recorded song.
Or, the singing client obtains a first recorded song of the first user and a second recorded song of the second user that have the same song accompaniment, according to a selection instruction by which a third user selects at least two recorded songs in the song plaza.
The first user, the second user, and the third user are each any user of the singing client, and the three users are different from one another.
Illustratively, fig. 2 is a schematic diagram of a user interface of the "song plaza" provided by an embodiment of the present application. As shown in fig. 2, the published recorded song information 201 of user account 1, recorded song information 202 of user account 2, and recorded song information 203 of user account 3 are displayed in the "song plaza". The recorded song information of each user account includes that account's recorded song name and account name. The recorded songs of user account 1, user account 2, and user account 3 all include the same song accompaniment, which is song 1. Optionally, for the recorded song of each user account, the singing client further displays the number of times that recorded song has been chorused with the recorded songs of other user accounts. When the singing client receives a click operation on the start chorus button 204, it obtains at least two songs from the published recorded songs.
In step S2, the singing client obtains a first recorded song segment from the first recorded song and the singing client obtains a second recorded song segment from the second recorded song.
In step S3, the singing client generates a chorus song according to the first recorded song segment and the second recorded song segment, thereby completing the multi-user chorus. In the process of generating the chorus song, the singing client acquires at least two recorded songs and combines recorded song segments from the at least two recorded songs into one chorus song. The user can freely select the recorded songs of different user accounts and, for each selected recorded song, flexibly choose the recorded song segments to be synthesized into the chorus song. This achieves efficient and flexible synthesis of chorus songs. The embodiment of the application provides a novel method for synthesizing chorus songs.
Fig. 3 is a schematic structural diagram of a song synthesizing system according to an embodiment of the present application, and as shown in fig. 3, the system includes: a server 310, a first terminal 320, and a second terminal 330.
Optionally, the server 310 is one server, a server cluster composed of several servers, a cloud computing service center, or the like, which is not limited herein. The first terminal 320 is a terminal device including a microphone, such as a smart phone, a tablet computer, a desktop computer, or a notebook computer. The second terminal 330 is likewise a terminal device including a microphone, such as a smart phone, a tablet computer, a desktop computer, or a notebook computer. The server 310 establishes a connection with the first terminal 320 through a wired or wireless network, and establishes a connection with the second terminal 330 through a wired or wireless network. The number of terminals connected to the server 310 in the song synthesis system of fig. 3 is merely illustrative and does not limit the song synthesis system provided in the embodiment of the present application. As shown in fig. 3, in the embodiment of the present application, both the first terminal 320 and the second terminal 330 are illustrated as smart phones.
It should be noted that a singing client is installed on the first terminal 320, the first terminal 320 connects to the server 310 through the singing client, and the server 310 is the server corresponding to the singing client. Likewise, a singing client is installed on the second terminal 330, which connects to the server 310 through it. The singing client on the first terminal 320 is the same as the singing client on the second terminal 330.
Fig. 4 is a schematic flowchart of a song synthesizing method according to an embodiment of the present application. The method may be used for a singing client on any terminal in a song composition system as shown in fig. 3. As shown in fig. 4, the method includes:
step 401, at least two recorded songs are obtained, where the at least two recorded songs include a first recorded song in a first user account and a second recorded song in a second user account.
And the singing client generates a chorus song according to the at least two recorded songs. Optionally, the client acquires a first recorded song of the first user account and a second recorded song of the second user account according to a selection operation of a user logging in the first user account. Namely, the user who logs in the first user account selects the recorded songs of other user accounts to sing with the recorded songs of the first user account. Or the client acquires a first recorded song of the first user account and a second recorded song of the second user account according to the selection operation of the user logging in other user accounts. Namely, the user who logs in other user accounts selects the recorded song of the first user account and the recorded song of the second user account to sing jointly. The first user account is any user account in the singing client, the second user account is any user account in the singing client, and other user accounts are any user accounts in the singing client. The first user account is different from the second user account. The other user accounts are different from the first user account and the second user account.
Optionally, the singing client acquires the at least two recorded songs according to the identifications of the at least two recorded songs. The identification of the recorded song is used to identify the recorded song. Optionally, the identifier of the recorded song includes information that can uniquely identify the recorded song, such as a name of the recorded song or a serial number of the recorded song in the server.
Optionally, the singing client may further obtain three recorded songs or four recorded songs, which is not limited herein. Optionally, the first recorded song includes a first accompaniment identifier and the second recorded song includes a second accompaniment identifier. The accompaniment identification is used to identify a song accompaniment in the recorded song. The first recorded song and the second recorded song comprise the same song accompaniment, namely the first accompaniment identifier is the same as the second accompaniment identifier. Optionally, the accompaniment identifier includes information that can uniquely identify the song accompaniment, such as a name of the song accompaniment or a serial number of the song accompaniment in the server.
The first recorded song of the first user account refers to a recorded song uploaded by a user who logs in the first user account in the singing client. Optionally, the first recorded song includes an identifier of the first user account, where the identifier includes an account name of the first user account. The second recorded song of the second user account refers to the recorded song uploaded by the user logging in the second user account in the singing client. Optionally, the second recorded song includes an identifier of the second user account, the identifier including an account name of the second user account.
Step 402, obtaining a first recorded song segment according to a first recorded song, wherein the first recorded song segment is a song segment in the first recorded song; and acquiring a second recorded song segment according to the second recorded song, wherein the second recorded song segment is a song segment in the second recorded song.
The first recorded song segment comprises the recorded song segment corresponding to a first target time period in the first recorded song, the recorded song segment corresponding to first target lyrics in the first recorded song, and/or a recorded song segment randomly determined by the singing client in the first recorded song. The first target time period includes the start timestamp and the end timestamp of the first recorded song segment in the first recorded song. Optionally, the singing client determines the first target time period from the selection operation, by the user logged in to the first user account, of the start timestamp and the end timestamp in the first recorded song. The singing client determines the first target lyrics from that user's selection operation on the lyrics in the first recorded song. The first target lyrics include one or more lines of lyrics in the first recorded song.
Illustratively, the duration of the first recorded song is 05:00:00 (representing 5 minutes), the start timestamp of the first recorded song segment in the first recorded song is 01:00:00, and the end timestamp is 03:00:00; the first recorded song segment is then the portion of the first recorded song from 01:00:00 to 03:00:00.
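As a hedged sketch of this timestamp-based extraction, the following assumes an mm:ss:ff timestamp layout (ff being hundredths of a second) and audio held as a flat list of samples; both are illustrative assumptions, since the patent does not fix a timestamp or audio format.

```python
def parse_timestamp(ts: str) -> float:
    """Parse an 'mm:ss:ff' timestamp into seconds (assumed layout, ff = hundredths)."""
    mm, ss, ff = (int(part) for part in ts.split(":"))
    return mm * 60 + ss + ff / 100

def extract_segment(samples, sample_rate, start_ts, end_ts):
    """Slice the recording between the start timestamp and the end timestamp."""
    start = int(parse_timestamp(start_ts) * sample_rate)
    end = int(parse_timestamp(end_ts) * sample_rate)
    return samples[start:end]

# The worked example above: a five-minute recording, segment from 01:00:00 to 03:00:00.
# segment = extract_segment(samples, 44100, "01:00:00", "03:00:00")
```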
The second recorded song segment comprises the recorded song segment corresponding to a second target time period in the second recorded song, the recorded song segment corresponding to second target lyrics in the second recorded song, and/or a recorded song segment randomly determined by the singing client in the second recorded song. The second target time period includes the start timestamp and the end timestamp of the second recorded song segment in the second recorded song. Optionally, the singing client determines the second target time period from the selection operation, by the user logged in to the first user account, of the start timestamp and the end timestamp in the second recorded song, and determines the second target lyrics from that user's selection operation on the lyrics in the second recorded song. The second target lyrics may include one or more lines of lyrics in the second recorded song. Optionally, among the first recorded song segment and the second recorded song segment there are recorded song segments that include the same lyrics.
Step 403, generating a chorus song according to the first recorded song segment and the second recorded song segment.
When the first recorded song segment and the second recorded song segment include different lyrics, the singing client generates a chorus song from the first recorded song segment and the second recorded song segment according to the chronological order of the lyrics in the recorded song segments.
When the first recorded song segment and the second recorded song segment include the same lyrics, the singing client optionally superimposes the voices in the recorded song segments that have the same lyrics to obtain a superimposed segment, and then generates a chorus song, in the chronological order of the lyrics, from the superimposed segment and the recorded song segments with different lyrics in the first recorded song segment and the second recorded song segment.
Illustratively, the first recorded song segment includes lyrics a, b, and c, and the second recorded song segment includes lyrics b, c, and d. When combining the first recorded song segment with the second recorded song segment to generate a chorus song, the singing client superimposes the voices of the recorded song segments corresponding to lyrics b and c in the first recorded song segment with those of the recorded song segments corresponding to lyrics b and c in the second recorded song segment, obtaining superimposed segments. It then combines the recorded song segment corresponding to lyric a in the first recorded song segment, the superimposed segments, and the recorded song segment corresponding to lyric d in the second recorded song segment to generate the chorus song.
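A minimal sketch of this merge rule follows, assuming each recorded song segment is represented as a mapping from a lyric line to that line's audio samples, and that overlapping vocals are superimposed by simple averaging (real mixing would also align and normalize the signals); all names are illustrative.

```python
def mix(a, b):
    """Superimpose two vocal segments of equal length by averaging their samples."""
    return [(x + y) / 2 for x, y in zip(a, b)]

def merge_by_lyrics(first: dict, second: dict, lyric_order: list) -> list:
    """first/second map a lyric line to that line's audio samples."""
    chorus = []
    for lyric in lyric_order:  # keep the lyrics in chronological order
        if lyric in first and lyric in second:
            chorus.extend(mix(first[lyric], second[lyric]))  # same lyric: superimpose voices
        elif lyric in first:
            chorus.extend(first[lyric])                      # lyric only in the first segment
        elif lyric in second:
            chorus.extend(second[lyric])                     # lyric only in the second segment
    return chorus

# The example above: lyrics b and c appear in both segments and are superimposed;
# lyric a is taken from the first segment and lyric d from the second.
# merge_by_lyrics({"a": a1, "b": b1, "c": c1}, {"b": b2, "c": c2, "d": d2},
#                 ["a", "b", "c", "d"])
```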
In summary, according to the song synthesis method provided by the embodiment of the present application, a first recorded song segment is obtained according to a first recorded song of at least two recorded songs, a second recorded song segment is obtained according to a second recorded song of the at least two recorded songs, and then a chorus song is generated according to the first recorded song segment and the second recorded song segment. The user can freely select the recorded songs of different user accounts, and can flexibly select the recorded song segments of the recorded songs to be synthesized aiming at each selected recorded song to synthesize the chorus songs. An efficient and flexible synthesis of chorus songs is achieved. The embodiment of the application provides a novel method for synthesizing chorus songs.
Fig. 5 is a schematic flowchart of another song synthesis method provided in an embodiment of the present application. The method may be used for a singing client on any terminal in a song composition system as shown in fig. 3. As shown in fig. 5, the method includes:
step 501, at least two recorded songs are obtained, wherein the at least two recorded songs include a first recorded song of a first user account and a second recorded song of a second user account.
The first user account is any user account in the singing client. The second user account is any user account in the singing client. The first user account is different from the second user account. The first recorded song is any one of recorded songs in the first user account. The second recorded song is any one of recorded songs in the second user account. The first recorded song and the second recorded song include the same song accompaniment.
In one possible implementation, as shown in fig. 6, the implementation of step 501 includes the following steps 5011a to 5011c:
in step 5011a, a first recorded song is obtained.
Optionally, the singing client acquires the first recorded song when it displays the details of the first recorded song in the user interface of the user logged in to the first user account, or when it receives, from the server, the recorded song identifier of at least one recommended recorded song recommended according to the first recorded song. The singing client acquires the first recorded song from the server according to the recorded song identifier of the first recorded song.
In step 5011b, at least one recommended recorded song recommended according to the first recorded song is displayed, the recommended recorded song includes a recorded song of the second user account, and the recommended recorded song includes the same song accompaniment as the first recorded song.
Optionally, the server acquires, according to the accompaniment identifier of the first recorded song acquired by the singing client, information on the recorded songs of other user accounts that include the same accompaniment identifier, generates recommendation information, and sends the recommendation information to the singing client. The singing client receives the recommendation information and, according to it, displays at least one recommended recorded song recommended according to the first recorded song in the user interface of the user logged in to the first user account. The recommendation information comprises information on the other user accounts and on their recorded songs: the account names of the other user accounts, together with the names, recorded song identifiers, and/or recording times of their recorded songs.
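A minimal sketch of this server-side recommendation step, reusing the illustrative RecordedSong fields from the earlier sketch; it simply assumes the server can iterate over the published recordings, which the patent does not specify.

```python
def recommend_chorus_candidates(first_song, published_songs):
    """Return other accounts' recordings that share the first song's accompaniment."""
    return [song for song in published_songs
            if song.accompaniment_id == first_song.accompaniment_id
            and song.account_id != first_song.account_id]
```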
Illustratively, fig. 7 is a schematic diagram of a user interface for displaying the details of a first recorded song provided in an embodiment of the present application. As shown in fig. 7, the user interface includes: the song name 701 of the first recorded song, i.e., the song accompaniment name of the first recorded song; account information 702 of the first user account, which includes the account name and avatar of the first user account; a "find chorus" button 703; and the lyrics, playing progress information, and playback control button of the first recorded song. When the singing client detects a click operation on the "find chorus" button, it displays at least one recommended recorded song recommended according to the first recorded song. Optionally, the singing client displays the recommended recorded songs in the current user interface, or in a separate user interface for displaying recommended recorded songs.
Illustratively, fig. 8 is a schematic diagram of a user interface for displaying recommended recorded songs provided by an embodiment of the present application. As shown in fig. 8, the user interface includes the song title of the recommended recorded songs, i.e., their song accompaniment title, together with recommended recorded song information 801 of user account 1, recommended recorded song information 802 of user account 2, and recommended recorded song information 803 of user account 3. The recommended recorded song information comprises the account name of the other user account, the recorded song name of the recommended recorded song, and the number of times the user account corresponding to the recommended recorded song has chorused it. Optionally, the singing client displays all recommended recorded songs according to the recommendation information sent by the server, or displays only the recommended recorded songs of user accounts that have a friend relationship with the first user account.
In step 5011c, in response to the selection instruction triggered by the first selection operation, a second recorded song is obtained from at least one recommended recorded song.
The first selection operation refers to the selection operation of the user logging in the first user account aiming at recommending the recorded song. Optionally, the selection instruction triggered by the first selection operation includes a recorded song identifier of the second recorded song, and the singing client acquires the second recorded song from the server according to the recorded song identifier. Optionally, the singing client may further be configured to obtain a plurality of recorded songs from the at least one recommended recorded song, where the plurality of recorded songs correspond to user accounts different from the first user account and the second user account.
Illustratively, with continued reference to fig. 8, the recommended recorded song information in the user interface also includes a selection box for each recommended recorded song and a chorus button 804. When the selection box of the recommended recorded song of user account 1 is in the selected state and the singing client detects a click operation on the chorus button 804, the singing client acquires the second recorded song from the server according to the recorded song identifier of the recommended recorded song of user account 1.
In another possible implementation, as shown in fig. 9, the implementation of step 501 includes the following steps 5012a to 5012c:
in step 5012a, the recorded song for the second user account is displayed.
Optionally, the recorded song of the second user account displayed by the singing client is any recorded song of the second user account. The singing client displays the recorded song of the second user account in the user interface of the user logging in the first user account. Optionally, when the singing client receives a selection operation of the user logging in the first user account for the recorded song of the second user account, the recorded song of the second user account is displayed. That is, the user who logs in the first user account views the recorded song of the second user account.
Fig. 10 is a schematic diagram of a user interface for displaying a recorded song of a second user account according to an embodiment of the present application. As shown in fig. 10, the user interface includes: the song name 1001 of the recorded song of the second user account; account information 1002 of the second user account, which includes the account name and avatar of the second user account; and the lyrics, playing progress information, and playback control button of the recorded song of the second user account.
In step 5012b, in response to the selection instruction triggered by the second selection operation, a second recorded song is obtained from the recorded songs in the second user account.
The second selection operation refers to the selection operation of the user who logs in the first user account for the recorded song of the second user account. Optionally, the selection instruction triggered by the second selection operation includes a recorded song identifier of the second recorded song, and the singing client acquires the second recorded song from the server according to the recorded song identifier.
Illustratively, with continued reference to fig. 10, the user interface also includes a "chorus with user 2" button 1003. When the singing client detects a click operation on the "chorus with user 2" button, it acquires the second recorded song from the server according to the recorded song identifier of the second recorded song in the selection instruction triggered by the click operation.
Optionally, after the singing client acquires the second recorded song from the recorded songs of the second user account according to the selection instruction triggered by the second selection operation, it also displays a user interface for selecting a chorus mode. For example, fig. 11 is a schematic diagram of a user interface for selecting a chorus mode provided in an embodiment of the present application. As shown in fig. 11, the user interface includes a chorus mode selection popup 1101. The popup includes a "select existing works chorus" option 1102, meaning that the user logged in to the first user account selects, from the recorded songs of the first user account, an existing first recorded song to chorus with the second recorded song. It also includes a "re-record chorus" option 1103, meaning that the user logged in to the first user account re-records a first recorded song according to the song accompaniment of the second recorded song and choruses it with the second recorded song.
In step 5012c, a first recorded song is obtained from the recorded songs in the first user account based on the second recorded song.
Alternatively, as shown in fig. 12, the implementation process of step 5012c includes the following steps 51a and 51b:
in step 51a, according to the second recorded song, candidate recorded songs of the first user account are displayed, wherein the candidate recorded songs and the second recorded song comprise the same song accompaniment.
Optionally, the candidate recorded songs of the first user account include any recorded song of the first user account. Displaying the candidate recorded songs of the first user account according to the second recorded song means that the singing client displays, according to the song accompaniment of the second recorded song, the recorded songs of the first user account that have the same song accompaniment.
Illustratively, with continued reference to fig. 11, the candidate recorded songs of the first user account are displayed according to the second recorded song. That is, after the "select existing works chorus" option is triggered, the candidate recorded songs of the first user account are displayed.
In step 51b, in response to a selection instruction triggered by the third selection operation, the first recorded song is obtained from the candidate recorded songs.
The third selection operation refers to the selection operation of the user logging in the first user account on the candidate recorded song. Optionally, the selection instruction triggered by the third selection operation includes a recorded song identifier of the first recorded song, and the singing client acquires the first recorded song from the server according to the recorded song identifier.
For example, fig. 13 is a schematic diagram of a user interface for selecting a first recorded song from candidate recorded songs provided by an embodiment of the present application. As shown in fig. 13, the user interface includes a recorded song selection popup 1301, which includes first candidate recorded song information 1302 and second candidate recorded song information 1303 of the first user account. The candidate recorded song information includes the song accompaniment of the candidate recorded song and the upload time of the candidate recorded song. When the singing client detects a click operation on a piece of candidate recorded song information, it acquires the first recorded song from the server according to the recorded song identifier in the selection instruction triggered by the click operation.
In yet another possible implementation, as shown in fig. 14, the implementation process of step 501 includes the following steps 5013a and 5013b:
in step 5013a, a recorded song plaza interface is displayed, where the recorded song plaza interface includes at least two publicly recorded songs, and the publicly recorded songs include the same song accompaniment.
Optionally, a publicly recorded song refers to a recorded song published to the recorded song plaza by a user logged in to any user account in the singing client. Optionally, the publicly recorded songs include any recorded song in the singing client.
For an example of the recorded song plaza interface displayed by the singing client, refer to fig. 2; the details are not repeated here.
In step 5013b, in response to the selection instruction triggered by the fourth selection operation, the first recorded song and the second recorded song are obtained from the at least two publicly recorded songs.
The fourth selection operation refers to the selection operation on publicly recorded songs by a user logged in to any user account. Optionally, the selection instruction triggered by the fourth selection operation includes the recorded song identifier of the first recorded song, the recorded song identifier of the second recorded song, and/or the recorded song identifiers of other recorded songs, where the user account corresponding to each recorded song is different. The singing client acquires the first recorded song, the second recorded song, and/or the other recorded songs from the server according to the recorded song identifiers.
Optionally, the recorded songs in this embodiment of the present application further include recorded songs for which a multi-user chorus has been completed. Illustratively, the song recorded in user account 1 is a1, the song recorded in user account 2 is b2, and the song recorded in user account 3 is c 3. Wherein recorded song a1, recorded song b2, and recorded song c3 include the same song accompaniment. The user logging in user account 1 selects recorded song b2 of user account 2 to sing with his own recorded song a1 to obtain recorded song a 2. The user logged into user account 4 may select recorded song a2 to sing with recorded song c3 of user account 3 among the publicly recorded songs.
Illustratively, fig. 15 is a schematic diagram of a user interface displaying a recorded song of a completed multi-user chorus provided in an embodiment of the present application. As shown in fig. 15, the user interface includes: the song name 1501 of the recorded song of the completed multi-user chorus; chorus user account information 1502, which includes the account name and avatar of the first chorus user account and the account name and avatar of the second chorus user account; a "chorus with them" button 1503; and the lyrics, playing progress information, and playback control button of the recorded song of the completed multi-user chorus. When the singing client detects a click operation on the "chorus with them" button, a multi-user chorus can be performed according to the method in steps 5012a to 5012c above.
Step 502, in response to a selection instruction triggered by a fifth selection operation, acquiring a first recorded song segment from the first recorded song.
The fifth selection operation refers to a selection operation on a song segment in the first recorded song. Optionally, the selection instruction triggered by the fifth selection operation includes a first target time period or a first target lyric identifier. The first target time period includes the start timestamp and the end timestamp of the first recorded song segment in the first recorded song. The first target lyric identifier identifies the lyrics that the first recorded song segment covers in the first recorded song. The singing client acquires the first recorded song segment from the first recorded song according to the first target time period or the first target lyric identifier. When the selection instruction triggered by the fifth selection operation includes neither a first target time period nor a first target lyric identifier, the singing client randomly acquires a first recorded song segment from the first recorded song.
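The selection logic of step 502 might look like the following sketch, where the selection instruction is assumed to be a plain dict and lyric spans are assumed to be precomputed; both representations are illustrative, not from the patent.

```python
import random

def select_segment_span(lyric_spans: dict, instruction: dict):
    """lyric_spans maps a target lyric identifier to its (start_ts, end_ts) span."""
    if "target_time_period" in instruction:
        return instruction["target_time_period"]            # explicit start/end timestamps
    if "target_lyric_id" in instruction:
        return lyric_spans[instruction["target_lyric_id"]]  # span of the chosen lyrics
    # neither a time period nor a lyric identifier was given: pick at random
    return random.choice(list(lyric_spans.values()))
```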
Illustratively, fig. 16 is a schematic diagram of a user interface in which the singing client randomly acquires song segments, according to an embodiment of the present application. As shown in fig. 16, the user interface includes: the song title 1601 of the first recorded song of user account 1, i.e., the song accompaniment of the first recorded song; user account information 1602 of user account 1 (the account name and avatar of user account 1); user account information 1603 of user account 2 (the account name and avatar of user account 2); and the lyric fragments corresponding to each user account. The song segments randomly acquired by the singing client from the first recorded song of user account 1 comprise the song segments corresponding to lyric 1, lyric 2, and lyric 3.
Step 503, in response to a selection instruction triggered by a sixth selection operation, a second recorded song segment is acquired from the second recorded song.
The sixth selection operation refers to a selection operation on a song segment in the second recorded song. Optionally, the selection instruction triggered by the sixth selection operation includes a second target time period or a second target lyric identifier. The second target time period includes the start timestamp and the end timestamp of the second recorded song segment in the second recorded song. The second target lyric identifier identifies the lyrics that the second recorded song segment covers in the second recorded song. The singing client acquires the second recorded song segment from the second recorded song according to the second target time period or the second target lyric identifier. When the selection instruction triggered by the sixth selection operation includes neither a second target time period nor a second target lyric identifier, the singing client randomly acquires a second recorded song segment from the second recorded song.
Illustratively, with continued reference to fig. 16, the song segments randomly acquired by the singing client from the second recorded song of user account 2 comprise the song segments corresponding to lyric 4, lyric 5, and lyric 6.
Illustratively, fig. 17 is a schematic diagram of a user interface in which a user selects song segments, according to an embodiment of the present application. As shown in fig. 17, the user interface includes: the song title 1701 of the first recorded song, i.e., the song accompaniment of the first recorded song; an "I sing" indicator 1702 and a "chorus" indicator 1703; a confirm button 1704; and the lyrics of the first recorded song, with a selection box corresponding to each lyric under each indicator. When the selection box under the "I sing" indicator 1702 is selected, the lyric in that row belongs to the first recorded song segment and not to the second recorded song segment. When the selection box under the "chorus" indicator 1703 is selected, the lyric in that row belongs to both the first recorded song segment and the second recorded song segment. When neither selection box in a lyric's row is selected, that lyric belongs to neither the first nor the second recorded song segment. When the singing client detects a click operation on the confirm button, it acquires the first recorded song segment from the first recorded song and the second recorded song segment from the second recorded song according to the selected lyrics.
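The mapping from the two selection boxes per lyric row to the two recorded song segments can be sketched as follows; the row tuple format is an assumption made for illustration.

```python
def split_by_selection(rows):
    """Each row is (lyric, i_sing_checked, chorus_checked), mirroring fig. 17."""
    first_segment_lyrics, second_segment_lyrics = [], []
    for lyric, i_sing, chorus in rows:
        if chorus:          # "chorus" box: the lyric goes to both segments
            first_segment_lyrics.append(lyric)
            second_segment_lyrics.append(lyric)
        elif i_sing:        # "I sing" box: the lyric goes to the first segment only
            first_segment_lyrics.append(lyric)
        # neither box checked: the lyric belongs to neither segment
    return first_segment_lyrics, second_segment_lyrics
```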
Step 504, a chorus song is generated according to the first recorded song segment and the second recorded song segment.
When the first recorded song segment and the second recorded song segment include different lyrics, the singing client generates a chorus song from the first recorded song segment and the second recorded song segment according to the chronological order of the lyrics in the recorded song segments.
When recorded song segments with the same lyrics exist in the first recorded song segment and the second recorded song segment, the singing client optionally superimposes the voices in the recorded song segments that have the same lyrics to obtain a superimposed segment, and then generates a chorus song, in the chronological order of the lyrics, from the superimposed segment and the recorded song segments with different lyrics in the first recorded song segment and the second recorded song segment.
For example, after a male user uploads, through a first user account, a first recorded song containing the male voice part of a male-female duet, he may be unable to quickly invite a female user to sing the female part. The song synthesis method provided by the embodiment of the application enables the male user to search for a second recorded song, of a second user account, that contains an already-uploaded female part of the same duet and to actively synthesize a chorus, without waiting for other users.
Illustratively, a user logged in to the first user account, while listening to a second recorded song of a second user account, finds that the first user account has a recorded song that includes the same song accompaniment as the second recorded song. The song synthesis method provided by the embodiment of the application enables that user to select the first recorded song from the first user account, select a first recorded song segment from it, and chorus it with the second recorded song, without having to record the song again.
Optionally, the step of generating a chorus song according to the first recorded song segment and the second recorded song segment can also be executed by the server. After generating the chorus song, the server sends it to the singing client.
Optionally, a recorded song in this embodiment refers to recorded song audio or a recorded song video. When a recorded song in this embodiment is a recorded song video, the recorded song segment includes an audio segment and a video image segment of the recorded song, the video image segment being synchronized with the audio segment. Optionally, when the singing client synthesizes the recorded song segments, it also synthesizes the video image segments of the recorded song segments: the singing client arranges the video image segments vertically or horizontally and combines them into one video image.
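For the video case, the per-frame composition could be sketched as below, assuming synchronized frames of identical size held as numpy arrays; the patent states only that the video image segments are arranged in one direction and merged into a single image.

```python
import numpy as np

def combine_video_frames(frames, vertical=True):
    """Stack one synchronized frame per user into a single composite frame."""
    return np.vstack(frames) if vertical else np.hstack(frames)

# Applied per timestamp: take the frame from each user's video image segment
# at that moment and stack the frames into one composite image.
```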
In summary, according to the song synthesis method provided by the embodiment of the present application, a first recorded song segment is obtained according to a first recorded song of at least two recorded songs, a second recorded song segment is obtained according to a second recorded song of the at least two recorded songs, and then a chorus song is generated according to the first recorded song segment and the second recorded song segment. The user can freely select the recorded songs of different user accounts, and can flexibly select the recorded song segments of the recorded songs to be synthesized aiming at each selected recorded song to synthesize the chorus songs. An efficient and flexible synthesis of chorus songs is achieved. The embodiment of the application provides a novel method for synthesizing chorus songs.
Optionally, the user can select a recommended recorded song, recommended according to the user's own recorded song, and chorus with that recorded song, which reduces the difficulty of finding a song to chorus with.
Optionally, when viewing the recorded songs of other users, the user can select one of his or her own recorded songs to chorus with them, which simplifies the user's operations.
Optionally, the user can select at least two songs to chorus in the recorded song square interface, which improves the user experience.
In addition, the user can also chorus with recorded songs that have already been chorused by multiple users, which makes synthesizing chorus songs more interesting.
It should be noted that the order of the steps of the song synthesis method provided in the embodiment of the present application may be adjusted as appropriate, and steps may be added or removed according to circumstances; any variation readily conceivable by those skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application, and is therefore not described again.
Fig. 18 is a schematic structural diagram of a song synthesizing apparatus according to an embodiment of the present application. The apparatus may be used in a singing client on any terminal of the song synthesis system shown in fig. 3. As shown in fig. 18, the apparatus 180 includes:
the first obtaining module 1801 is configured to obtain at least two recorded songs, where the at least two recorded songs include a first recorded song in a first user account and a second recorded song in a second user account, the first recorded song and the second recorded song include the same song accompaniment, and the first user account is different from the second user account.
A second obtaining module 1802, configured to obtain a first recorded song segment according to the first recorded song, where the first recorded song segment is a song segment in the first recorded song; and the second obtaining module is further used for obtaining a second recorded song segment according to the second recorded song, wherein the second recorded song segment is a song segment in the second recorded song.
A generating module 1803, configured to generate a chorus song according to the first recorded song segment and the second recorded song segment.
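A rough structural sketch of the apparatus follows, rendered in Python purely for illustration; the module responsibilities mirror the description above, but the method signatures are invented:

```python
class SongSynthesisApparatus:
    """Rough shape of apparatus 180 and its three modules."""

    def acquire_recorded_songs(self, first_account, second_account):
        # first obtaining module 1801: fetch one recorded song from each of
        # two different user accounts, both containing the same song accompaniment
        ...

    def acquire_segment(self, recorded_song, selection):
        # second obtaining module 1802: cut the user-selected song segment
        # out of a recorded song
        ...

    def generate_chorus_song(self, first_segment, second_segment):
        # generating module 1803: superimpose same-lyric segments and splice
        # the result in lyric time order (see the mixing sketch above)
        ...
```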
In summary, the song synthesizing apparatus provided in the embodiment of the present application obtains at least two recorded songs through the first obtaining module, obtains the first recorded song segment from the first recorded song and the second recorded song segment from the second recorded song through the second obtaining module, and then generates a chorus song from the two segments through the generating module. The user can freely select recorded songs from different user accounts and, for each selected recorded song, flexibly choose which recorded song segment to synthesize, achieving efficient and flexible synthesis of chorus songs. The embodiment of the present application thus provides a new method for synthesizing chorus songs.
Optionally, the first obtaining module 1801 is configured to:
obtain the first recorded song; display at least one recommended recorded song recommended according to the first recorded song, where the recommended recorded songs include a recorded song of the second user account and include the same song accompaniment as the first recorded song; and obtain the second recorded song from the at least one recommended recorded song in response to a selection instruction triggered by a first selection operation.
Optionally, the first obtaining module 1801 is configured to:
display the recorded songs of the second user account; obtain the second recorded song from the recorded songs of the second user account in response to a selection instruction triggered by a second selection operation; and obtain the first recorded song from the recorded songs of the first user account according to the second recorded song.
Optionally, the first obtaining module 1801 is configured to:
display candidate recorded songs of the first user account according to the second recorded song, where the candidate recorded songs include the same song accompaniment as the second recorded song; and obtain the first recorded song from the candidate recorded songs in response to a selection instruction triggered by a third selection operation.
Optionally, the first obtaining module 1801 is configured to:
display a recorded song square interface, where the recorded song square interface includes at least two publicly recorded songs that contain the same song accompaniment; and obtain the first recorded song and the second recorded song from the at least two publicly recorded songs in response to a selection instruction triggered by a fourth selection operation.
Optionally, the second obtaining module 1802 is configured to:
obtain the first recorded song segment from the first recorded song in response to a selection instruction triggered by a fifth selection operation.
A second obtaining module 1802, further configured to:
obtain the second recorded song segment from the second recorded song in response to a selection instruction triggered by a sixth selection operation.
Optionally, the generating module 1803 is configured to:
when recorded song segments with the same lyrics exist in the first recorded song segment and the second recorded song segment, superimpose the human voices in the same-lyric segments to obtain a superimposed segment; and generate a chorus song from the superimposed segment and the segments with different lyrics in the first recorded song segment and the second recorded song segment.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus and the modules described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Embodiments of the present application further provide a computer device, including a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set that is loaded and executed by the processor to implement the song synthesis method provided by the above method embodiments.
The computer device may be a terminal. Illustratively, fig. 19 is a schematic structural diagram of a terminal provided in an embodiment of the present application.
Generally, terminal 1900 includes: a processor 1901 and a memory 1902.
The processor 1901 may include one or more processing cores, for example a 4-core or an 8-core processor. The processor 1901 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1901 may also include a main processor and a coprocessor, where the main processor, also called a CPU (Central Processing Unit), processes data in the awake state, and the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1901 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 1902 may include one or more computer-readable storage media, which may be non-transitory. The memory 1902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1902 is used to store at least one instruction for execution by processor 1901 to implement a song composition method as provided by method embodiments herein.
In some embodiments, terminal 1900 may further optionally include: a peripheral interface 1903 and at least one peripheral. The processor 1901, memory 1902, and peripheral interface 1903 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 1903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1904, a display screen 1905, a camera assembly 1906, an audio circuit 1907, a positioning assembly 1908, and a power supply 1909.
The peripheral interface 1903 may be used to connect at least one I/O (Input/Output)-related peripheral to the processor 1901 and the memory 1902. In some embodiments, the processor 1901, the memory 1902, and the peripheral interface 1903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1901, the memory 1902, and the peripheral interface 1903 may be implemented on a separate chip or circuit board, which is not limited in this application.
The radio frequency circuit 1904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1904 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1904 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1904 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of each generation (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1904 may further include NFC (Near Field Communication) related circuits, which is not limited in the present application.
The display screen 1905 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 1905 is a touch display screen, it also has the ability to capture touch signals on or above its surface; such a touch signal may be input to the processor 1901 as a control signal for processing. At this point, the display screen 1905 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1905, forming the front panel of terminal 1900; in other embodiments, there may be at least two display screens 1905, each disposed on a different surface of the terminal 1900 or in a folded design; in still other embodiments, the display screen 1905 may be a flexible display disposed on a curved or folding surface of terminal 1900. The display screen 1905 may even be arranged as a non-rectangular irregular figure, i.e., an irregularly-shaped screen. The display screen 1905 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1906 is used to capture images or video. Optionally, the camera assembly 1906 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal 1900 and the rear camera is disposed on the back of the terminal. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, or a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 1906 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash combines a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuitry 1907 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs them to the processor 1901 for processing, or to the radio frequency circuit 1904 for voice communication. For stereo capture or noise reduction, multiple microphones may be provided at different locations of the terminal 1900. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker converts electrical signals from the processor 1901 or the radio frequency circuit 1904 into sound waves. The speaker can be a traditional thin-film speaker or a piezoelectric ceramic speaker; a piezoelectric ceramic speaker can not only convert an electrical signal into sound waves audible to humans, but also convert an electrical signal into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuitry 1907 may also include a headphone jack.
The positioning component 1908 is used to locate the current geographic position of the terminal 1900 for navigation or LBS (Location Based Service). The positioning component 1908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1909 is used to provide power to the various components in terminal 1900. The power source 1909 can be alternating current, direct current, disposable batteries, or rechargeable batteries. When power supply 1909 includes a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1900 also includes one or more sensors 1910. The one or more sensors 1910 include, but are not limited to: acceleration sensor 1911, gyro sensor 1912, pressure sensor 1913, fingerprint sensor 1914, optical sensor 1915, and proximity sensor 1916.
The acceleration sensor 1911 may detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 1900; for example, it may detect the components of gravitational acceleration on the three axes. The processor 1901 may control the touch display 1905 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1911. The acceleration sensor 1911 may also be used to collect game or user motion data.
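As a hedged illustration of how the gravity components could be mapped to a landscape or portrait view (the decision rule below is an assumption for illustration, not the terminal's actual logic):

```python
def choose_orientation(ax: float, ay: float) -> str:
    """Pick the view from gravity components (m/s^2) on the screen's
    horizontal (x) and vertical (y) axes: whichever axis gravity
    dominates points "down"."""
    return "portrait" if abs(ay) >= abs(ax) else "landscape"

assert choose_orientation(0.3, 9.7) == "portrait"   # terminal held upright
assert choose_orientation(9.6, 0.5) == "landscape"  # terminal on its side
```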
The gyro sensor 1912 may detect a body direction and a rotation angle of the terminal 1900, and the gyro sensor 1912 may collect a 3D motion of the user on the terminal 1900 in cooperation with the acceleration sensor 1911. From the data collected by the gyro sensor 1912, the processor 1901 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 1913 may be disposed on a side bezel of terminal 1900 and/or on a lower layer of the touch display 1905. When the pressure sensor 1913 is disposed on the side bezel of the terminal 1900, a grip signal of the user on the terminal 1900 can be detected, and the processor 1901 can perform left- or right-hand recognition or shortcut operations based on the grip signal collected by the pressure sensor 1913. When the pressure sensor 1913 is disposed on the lower layer of the touch display 1905, the processor 1901 controls the operability controls on the UI according to the user's pressure operation on the touch display 1905. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1914 is used to collect the user's fingerprint, and the processor 1901 identifies the user according to the fingerprint collected by the fingerprint sensor 1914, or the fingerprint sensor 1914 itself identifies the user according to the collected fingerprint. Upon identifying the user as trusted, the processor 1901 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1914 may be disposed on the front, back, or side of terminal 1900; when a physical button or a vendor Logo is provided on terminal 1900, the fingerprint sensor 1914 may be integrated with the physical button or the vendor Logo.
The optical sensor 1915 is used to collect the ambient light intensity. In one embodiment, the processor 1901 may control the display brightness of the touch display 1905 according to the ambient light intensity collected by the optical sensor 1915: when the ambient light intensity is high, the display brightness of the touch display 1905 is increased; when the ambient light intensity is low, the display brightness of the touch display 1905 is decreased. In another embodiment, the processor 1901 may also dynamically adjust the shooting parameters of the camera assembly 1906 according to the ambient light intensity collected by the optical sensor 1915.
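A toy sketch of this ambient-light rule; the lux thresholds and the step size are invented for illustration:

```python
def adjust_brightness(ambient_lux: float, current: float) -> float:
    """Nudge display brightness (a fraction in [0, 1]) toward the
    ambient light intensity."""
    if ambient_lux > 500:        # bright surroundings: turn brightness up
        return min(1.0, current + 0.1)
    if ambient_lux < 50:         # dim surroundings: turn brightness down
        return max(0.1, current - 0.1)
    return current
```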
The proximity sensor 1916, also referred to as a distance sensor, is typically disposed on the front panel of terminal 1900 and is used to measure the distance between the user and the front face of terminal 1900. In one embodiment, when the proximity sensor 1916 detects that the distance between the user and the front face of terminal 1900 gradually decreases, the processor 1901 controls the touch display 1905 to switch from the screen-on state to the screen-off state; when the proximity sensor 1916 detects that the distance gradually increases, the processor 1901 controls the touch display 1905 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in FIG. 19 is not intended to be limiting of terminal 1900 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Also provided in embodiments of the present application is a computer storage medium in which at least one instruction, at least one program, a code set, or an instruction set is stored, the at least one instruction, at least one program, code set, or instruction set being loaded and executed by a processor to implement the song synthesis method provided by the above method embodiments.
It will be understood by those skilled in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only an example of the present application and is not intended to be limiting; any modification, equivalent substitution, improvement, and the like made within the spirit and principles of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A method of synthesizing a song, the method comprising:
acquiring at least two recorded songs, wherein the at least two recorded songs comprise a first recorded song of a first user account and a second recorded song of a second user account, the first recorded song and the second recorded song comprise the same song accompaniment, and the first user account is different from the second user account;
acquiring a first recorded song segment according to the first recorded song, wherein the first recorded song segment is a song segment in the first recorded song; acquiring a second recorded song segment according to the second recorded song, wherein the second recorded song segment is a song segment in the second recorded song;
and generating a chorus song according to the first recorded song segment and the second recorded song segment.
2. The method of claim 1, wherein obtaining at least two recorded songs comprises:
acquiring the first recorded song;
displaying at least one recommended recording song recommended according to the first recording song, wherein the recommended recording song comprises a recording song of the second user account, and the recommended recording song and the first recording song comprise the same song accompaniment;
and acquiring the second recorded song from the at least one recommended recorded song in response to a selection instruction triggered by the first selection operation.
3. The method of claim 1, wherein obtaining at least two recorded songs comprises:
displaying the recorded song of the second user account;
responding to a selection instruction triggered by a second selection operation, and acquiring a second recorded song from the recorded songs of the second user account;
and acquiring the first recorded song from the recorded songs of the first user account according to the second recorded song.
4. The method of claim 3, wherein obtaining the first recorded song from the recorded songs in the first user account based on the second recorded song comprises:
displaying candidate recorded songs of the first user account according to the second recorded songs, wherein the candidate recorded songs and the second recorded songs comprise the same song accompaniment;
and responding to a selection instruction triggered by a third selection operation, and acquiring the first recorded song from the candidate recorded songs.
5. The method of claim 1, wherein obtaining at least two recorded songs comprises:
displaying a recorded song square interface, wherein the recorded song square interface comprises at least two publicly recorded songs, and the publicly recorded songs comprise the same song accompaniment;
and responding to a selection instruction triggered by a fourth selection operation, and acquiring the first recorded song and the second recorded song from the at least two public recorded songs.
6. The method according to any one of claims 1 to 5,
the obtaining of the first recorded song segment includes:
responding to a selection instruction triggered by a fifth selection operation, and acquiring the first recorded song segment from the first recorded song;
the obtaining of the second recorded song segment includes:
and responding to a selection instruction triggered by a sixth selection operation, and acquiring the second recorded song segment from the second recorded song.
7. The method of any of claims 1 to 5, wherein generating a chorus song based on the first recorded song segment and the second recorded song segment comprises:
when the first recorded song segment and the second recorded song segment have recorded song segments with the same lyrics, superposing the human voice in the recorded song segments with the same lyrics to obtain superposed segments;
and generating a chorus song according to the superposed segments and the recorded song segments with different lyrics in the first recorded song segment and the second recorded song segment.
8. A song synthesizing apparatus, characterized in that the apparatus comprises:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring at least two recorded songs, the at least two recorded songs comprise a first recorded song of a first user account and a second recorded song of a second user account, the first recorded song and the second recorded song comprise the same song accompaniment, and the first user account is different from the second user account;
a second obtaining module, configured to obtain a first recorded song segment according to the first recorded song, where the first recorded song segment is a song segment in the first recorded song; the second acquisition module is further used for acquiring a second recorded song segment according to the second recorded song, wherein the second recorded song segment is a song segment in the second recorded song;
and the generating module is used for generating a chorus song according to the first recorded song segment and the second recorded song segment.
9. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the song synthesis method of any one of claims 1 to 7.
10. A computer storage medium having stored therein at least one instruction, at least one program, set of codes, or set of instructions that is loaded and executed by a processor to implement a song synthesis method according to any one of claims 1 to 7.
CN202010442261.4A 2020-05-22 2020-05-22 Song synthesis method, device, equipment and storage medium Active CN111599328B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010442261.4A CN111599328B (en) 2020-05-22 2020-05-22 Song synthesis method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111599328A true CN111599328A (en) 2020-08-28
CN111599328B CN111599328B (en) 2024-04-09

Family

ID=72192477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010442261.4A Active CN111599328B (en) 2020-05-22 2020-05-22 Song synthesis method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111599328B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014064066A (en) * 2012-09-19 2014-04-10 Sd Advisors Co Ltd Data generation method, data generation system, server unit for performing data generation, and program
CN106486128A (en) * 2016-09-27 2017-03-08 腾讯科技(深圳)有限公司 A kind of processing method and processing device of double-tone source audio data
US20170180288A1 (en) * 2015-12-17 2017-06-22 Facebook, Inc. Personal music compilation
CN108269560A (en) * 2017-01-04 2018-07-10 北京酷我科技有限公司 A kind of speech synthesizing method and system
CN109119057A (en) * 2018-08-30 2019-01-01 Oppo广东移动通信有限公司 Musical composition method, apparatus and storage medium and wearable device
CN110349559A (en) * 2019-07-12 2019-10-18 广州酷狗计算机科技有限公司 Carry out audio synthetic method, device, system, equipment and storage medium
CN110675848A (en) * 2019-09-30 2020-01-10 腾讯音乐娱乐科技(深圳)有限公司 Audio processing method, device and storage medium


Also Published As

Publication number Publication date
CN111599328B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
CN110336960B (en) Video synthesis method, device, terminal and storage medium
CN108683927B (en) Anchor recommendation method and device and storage medium
CN109033335B (en) Audio recording method, device, terminal and storage medium
CN108538302B (en) Method and apparatus for synthesizing audio
CN110545476B (en) Video synthesis method and device, computer equipment and storage medium
CN108965757B (en) Video recording method, device, terminal and storage medium
CN110688082B (en) Method, device, equipment and storage medium for determining adjustment proportion information of volume
CN109192218B (en) Method and apparatus for audio processing
CN109922356B (en) Video recommendation method and device and computer-readable storage medium
CN111061405B (en) Method, device and equipment for recording song audio and storage medium
CN110266982B (en) Method and system for providing songs while recording video
CN109743461B (en) Audio data processing method, device, terminal and storage medium
CN111327928A (en) Song playing method, device and system and computer storage medium
CN110996167A (en) Method and device for adding subtitles in video
CN111402844B (en) Song chorus method, device and system
CN111711838B (en) Video switching method, device, terminal, server and storage medium
CN111276122A (en) Audio generation method and device and storage medium
CN111092991B (en) Lyric display method and device and computer storage medium
CN108055349B (en) Method, device and system for recommending K song audio
CN111064657B (en) Method, device and system for grouping concerned accounts
CN113204672A (en) Resource display method and device, computer equipment and medium
CN112069350A (en) Song recommendation method, device, equipment and computer storage medium
CN111294626A (en) Lyric display method and device
CN108806730B (en) Audio processing method, device and computer readable storage medium
CN111246233B (en) Video live broadcast method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant