CN111599328B - Song synthesis method, device, equipment and storage medium - Google Patents

Song synthesis method, device, equipment and storage medium

Info

Publication number
CN111599328B
CN111599328B (application CN202010442261.4A)
Authority
CN
China
Prior art keywords
song
recorded
recorded song
user account
segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010442261.4A
Other languages
Chinese (zh)
Other versions
CN111599328A (en)
Inventor
苏裕贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd
Priority to CN202010442261.4A
Publication of CN111599328A
Application granted
Publication of CN111599328B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00: Details of electrophonic musical instruments
    • G10H1/36: Accompaniment arrangements
    • G10H1/361: Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • G10H1/366: Recording/reproducing of accompaniment for use with an external source, with means for modifying or correcting the external signal, e.g. pitch correction, reverberation, changing a singer's voice
    • G10H2210/00: Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/005: Musical accompaniment, i.e. complete instrumental rhythm synthesis added to a performed melody, e.g. as output by drum machines
    • G10H2210/101: Music composition or musical creation; tools or processes therefor
    • G10H2210/125: Medley, i.e. linking parts of different musical pieces in one single piece, e.g. sound collage, DJ mix

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)

Abstract

The application discloses a song synthesis method, device, equipment and storage medium, belonging to the technical field of audio and video processing. The method comprises the following steps: obtaining at least two recorded songs, the at least two recorded songs comprising a first recorded song of a first user account and a second recorded song of a second user account, where the first recorded song and the second recorded song comprise the same song accompaniment and the first user account and the second user account are different; obtaining a first recorded song segment from the first recorded song, the first recorded song segment being a song segment in the first recorded song, and obtaining a second recorded song segment from the second recorded song, the second recorded song segment being a song segment in the second recorded song; and generating a chorus song from the first recorded song segment and the second recorded song segment. The present application provides a new way of synthesizing chorus songs.

Description

Song synthesis method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of audio and video processing technologies, and in particular, to a song synthesis method, apparatus, device, and storage medium.
Background
Singing clients, i.e. applications with a song-recording (karaoke) capability, are currently among the most popular entertainment applications. After logging in to a singing client, a user can select a song accompaniment and sing, and can also chorus with other users.
In the chorus function currently provided by singing clients, the singing client generally divides the song accompaniment into singing roles according to its lyrics, with different singing roles corresponding to different song segments of the accompaniment. According to the singing role selected by each user, the singing client records the song segments sung by that user under that role and generates a recording file. Only after the song segments of every singing role of the accompaniment have been recorded does the singing client merge the recording files of the roles into one song, which the users have thereby chorused.
In this process, the singing client must divide the song accompaniment into singing roles, and a chorus song can be synthesized only after the song segments of all singing roles have been recorded. This way of synthesizing chorus songs is therefore quite limited.
Disclosure of Invention
The application provides a song synthesizing method, device, equipment and storage medium, which can provide a new way of synthesizing chorus songs. The technical solution is as follows:
according to an aspect of the present application, there is provided a song synthesizing method, the method including:
acquiring at least two recorded songs, wherein the at least two recorded songs comprise a first recorded song of a first user account and a second recorded song of a second user account, the first recorded song and the second recorded song comprise the same song accompaniment, and the first user account and the second user account are different;
acquiring a first recorded song segment according to the first recorded song, wherein the first recorded song segment is a song segment in the first recorded song; obtaining a second recorded song segment according to the second recorded song, wherein the second recorded song segment is a song segment in the second recorded song;
and generating a chorus song according to the first recorded song segment and the second recorded song segment.
According to another aspect of the present application, there is provided a song synthesizing apparatus, the apparatus including:
The first acquisition module is used for acquiring at least two recorded songs, wherein the at least two recorded songs comprise a first recorded song of a first user account and a second recorded song of a second user account, the first recorded song and the second recorded song comprise the same song accompaniment, and the first user account and the second user account are different;
the second acquisition module is used for acquiring a first recorded song segment according to the first recorded song, wherein the first recorded song segment is a song segment in the first recorded song; the second acquisition module is further configured to acquire a second recorded song segment according to the second recorded song, where the second recorded song segment is a song segment in the second recorded song;
and the generation module is used for generating a chorus song according to the first recorded song segment and the second recorded song segment.
Optionally, the first obtaining module is configured to:
acquiring the first recorded song;
displaying at least one recommended recorded song recommended according to the first recorded song, wherein the recommended recorded song comprises a recorded song of the second user account, and the recommended recorded song and the first recorded song comprise the same song accompaniment;
And responding to a selection instruction triggered by the first selection operation, and acquiring the second recorded song from the at least one recommended recorded song.
Optionally, the first obtaining module is configured to:
displaying the recorded song of the second user account;
responding to a selection instruction triggered by a second selection operation, and acquiring a second recorded song from the recorded songs of the second user account;
and acquiring the first recorded song from the recorded songs of the first user account according to the second recorded song.
Optionally, the first obtaining module is configured to:
displaying candidate recorded songs of the first user account according to the second recorded song, wherein the candidate recorded songs and the second recorded song comprise the same song accompaniment;
and responding to a selection instruction triggered by a third selection operation, and acquiring the first recorded song from the candidate recorded songs.
Optionally, the first obtaining module is configured to:
displaying a recorded song square interface, wherein the recorded song square interface comprises at least two public recorded songs, and the public recorded songs comprise the same song accompaniment;
and responding to a selection instruction triggered by a fourth selection operation, and acquiring the first recorded song and the second recorded song from the at least two public recorded songs.
Optionally, the second obtaining module is configured to:
responding to a selection instruction triggered by a fifth selection operation, and acquiring the first recorded song fragment from the first recorded song;
the second obtaining module is further configured to:
and responding to a selection instruction triggered by a sixth selection operation, and acquiring the second recorded song fragment from the second recorded song.
Optionally, the generating module is configured to:
when recorded song segments with the same lyrics exist in the first recorded song segment and the second recorded song segment, superimposing the voices in the recorded song segments with the same lyrics to obtain a superimposed segment;
and generating a chorus song according to the superimposed segment and the recorded song segments, in the first recorded song segment and the second recorded song segment, with different lyrics.
According to yet another aspect of the present application, there is provided a computer device including a processor and a memory, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, the at least one instruction, the at least one program, the code set, or the set of instructions being loaded and executed by the processor to implement the song synthesizing method of the above aspect.
According to yet another aspect of the present application, there is provided a computer storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the song synthesizing method of the above aspect.
The technical solution provided by the application brings at least the following beneficial effects:
the method comprises the steps of obtaining a first recorded song segment according to a first recorded song of at least two recorded songs, obtaining a second recorded song segment according to a second recorded song of the at least two recorded songs, and generating a chorus song according to the first recorded song segment and the second recorded song segment. The user can freely select the recorded songs of different user accounts, and can flexibly select the recorded song segments of the recorded songs needing to be synthesized for each selected recorded song to synthesize the chorus song. An efficient and flexible composition of chorus songs is achieved. The present application provides a new way of synthesizing chorus songs.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of implementing multiuser chorus according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a user interface for a "song plaza" provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a song composition system according to an embodiment of the present application;
fig. 4 is a schematic flow chart of a song synthesizing method according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of another song composition method according to an embodiment of the present application;
fig. 6 is a flowchart of a method for obtaining a recorded song according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a user interface for displaying details of a first recorded song provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a user interface for displaying recommended recording songs provided by an embodiment of the present application;
fig. 9 is a flowchart of another method for obtaining a recorded song according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a user interface for displaying recorded songs of a second user account, provided in an embodiment of the present application;
FIG. 11 is a schematic diagram of a user interface for chorus mode selection provided by an embodiment of the present application;
fig. 12 is a flowchart of a method for obtaining a first recorded song from a recorded song of a first user account according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a user interface for selecting a first recorded song from among candidate recorded songs provided in an embodiment of the present application;
fig. 14 is a flowchart of another method for obtaining a recorded song according to an embodiment of the present application;
FIG. 15 is a schematic diagram of a user interface for displaying recorded songs with multi-user chorus completed according to an embodiment of the present application;
fig. 16 is a schematic diagram of a user interface for a singing client to randomly acquire song segments according to an embodiment of the present application;
FIG. 17 is a schematic diagram of a user interface for selecting song segments by a user provided in an embodiment of the present application;
fig. 18 is a schematic structural diagram of a song synthesizer according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of a terminal according to an embodiment of the present application.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart for implementing multiuser chorus according to an embodiment of the present application. As shown in fig. 1:
In step S1, the singing client obtains at least two recorded songs. Optionally, the singing client obtains a first recorded song of a first user and, according to the first recorded song, recommends to the first user recorded songs of a second user that include the same song accompaniment, where the first user is different from the second user.
When the singing client receives a selection instruction selecting a second recorded song from the recommended recorded songs, it acquires the second recorded song.
Alternatively, when the first user views a recorded song of the second user, the singing client obtains the second recorded song according to the first user's selection instruction on it, and then obtains a first recorded song of the first user that comprises the same song accompaniment as the second recorded song.
Alternatively, the singing client obtains the first recorded song of the first user and the second recorded song of the second user, which comprise the same song accompaniment, according to a selection instruction by which a third user selects at least two recorded songs in the song square.
The first user, the second user and the third user are any user in the singing client, and the first user, the second user and the third user are different.
Illustratively, FIG. 2 is a schematic diagram of a user interface for a "song square" provided by an embodiment of the present application. As shown in fig. 2, the "song square" displays released recorded song information 201 of user account 1, recorded song information 202 of user account 2, and recorded song information 203 of user account 3. Each piece of recorded song information includes the name of that account's recorded song and the account name. The recorded songs of user account 1, user account 2 and user account 3 comprise the same song accompaniment, namely song 1. Optionally, for each user account's recorded song, the singing client also displays the number of times that recorded song has been chorused with recorded songs of other user accounts. When the singing client receives a click operation on the start chorus button 204, it acquires at least two songs from the released recorded songs.
In step S2, the singing client obtains a first recorded song segment from the first recorded song, and the singing client obtains a second recorded song segment from the second recorded song.
In step S3, the singing client generates a chorus song according to the first recorded song segment and the second recorded song segment, thereby completing the multiuser chorus. In generating the chorus song, the singing client obtains at least two recorded songs and merges recorded song segments from those recorded songs into a chorus song. The user can freely select recorded songs of different user accounts and, for each selected recorded song, flexibly select the recorded song segment to be synthesized. Efficient and flexible synthesis of chorus songs is thus achieved. The embodiment of the application provides a new way of synthesizing chorus songs.
Fig. 3 is a schematic structural diagram of a song synthesizing system according to an embodiment of the present application, as shown in fig. 3, where the system includes: server 310, first terminal 320, and second terminal 330.
Optionally, the server 310 is a single server, a server cluster formed by several servers, a cloud computing service center, or the like, which is not limited here. The first terminal 320 is a terminal device including a microphone, such as a smart phone, a tablet computer, a desktop computer, or a notebook computer, and so is the second terminal 330. A connection is established between the server 310 and the first terminal 320 through a wired or wireless network, and likewise between the server 310 and the second terminal 330. The number of terminals establishing a connection with the server 310 in the song synthesis system of fig. 3 is merely illustrative and does not limit the song synthesis system provided by the embodiments of the present application. As shown in fig. 3, in the embodiment of the present application, both the first terminal 320 and the second terminal 330 are smart phones.
It should be noted that a singing client is installed on the first terminal 320, through which the first terminal 320 connects to the server 310, the server 310 being the server corresponding to the singing client. Likewise, a singing client is installed on the second terminal 330, through which the second terminal 330 connects to the server 310. The singing client on the first terminal 320 is the same as the singing client on the second terminal 330.
Fig. 4 is a flow chart of a song synthesizing method according to an embodiment of the present application. The method may be used for singing clients on any terminal in a song composition system as shown in fig. 3. As shown in fig. 4, the method includes:
step 401, obtaining at least two recorded songs, where the at least two recorded songs include a first recorded song of a first user account and a second recorded song of a second user account.
The singing client generates a chorus song according to the at least two recorded songs. Optionally, the client obtains the first recorded song of the first user account and the second recorded song of the second user account according to a selection operation by the user logged in to the first user account; that is, the user logged in to the first user account selects a recorded song of another user account to chorus with a recorded song of the first user account. Alternatively, the client obtains the first recorded song of the first user account and the second recorded song of the second user account according to a selection operation by a user logged in to another user account; that is, a user logged in to another user account selects the recorded song of the first user account and the recorded song of the second user account to chorus. The first user account, the second user account and the other user account are each any user account in the singing client. The first user account is different from the second user account, and the other user account is different from both the first user account and the second user account.
Optionally, the singing client obtains the at least two recorded songs according to the identifiers of the at least two recorded songs. The identification of the recorded song is used to identify the recorded song. Optionally, the identification of the recorded song includes information capable of uniquely identifying the recorded song, such as a name of the recorded song or a serial number of the recorded song in the server.
Optionally, the singing client can also obtain three recorded songs or four recorded songs, which is not limited herein. Optionally, the first recorded song includes a first accompaniment identifier and the second recorded song includes a second accompaniment identifier. The accompaniment identifier is used to identify the song accompaniment in the recorded song. The first recorded song and the second recorded song comprise the same song accompaniment, i.e. the first accompaniment identifier is the same as the second accompaniment identifier. Alternatively, the accompaniment identification includes information capable of uniquely identifying the song accompaniment such as the name of the song accompaniment or the serial number of the song accompaniment in the server.
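To make the data relationships above concrete, the following is a minimal illustrative sketch (not from the patent) of recorded-song metadata and the same-accompaniment check; the names `RecordedSong`, `same_accompaniment`, and the field names are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class RecordedSong:
    song_id: str           # recorded song identifier, e.g. its serial number on the server
    user_account: str      # account name of the uploading user account
    accompaniment_id: str  # identifies the song accompaniment used in the recording

def same_accompaniment(first: RecordedSong, second: RecordedSong) -> bool:
    # Two recorded songs can be chorused only if they were recorded over
    # the same song accompaniment, i.e. their accompaniment identifiers match.
    return first.accompaniment_id == second.accompaniment_id
```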
The first recorded song of the first user account refers to the recorded song uploaded by the user logging in the first user account in the singing client. Optionally, the first recorded song includes an identification of the first user account, the identification including an account name of the first user account. The second recorded song of the second user account refers to the recorded song uploaded by the user logging in the second user account in the singing client. Optionally, the second recorded song includes an identification of the second user account, the identification including an account name of the second user account.
Step 402, obtaining a first recorded song segment according to a first recorded song, wherein the first recorded song segment is a song segment in the first recorded song; and obtaining a second recorded song segment according to the second recorded song, wherein the second recorded song segment is a song segment in the second recorded song.
The first recorded song segment is a recorded song segment corresponding to a first target time period in the first recorded song, a recorded song segment corresponding to first target lyrics in the first recorded song, or a recorded song segment randomly determined by the singing client in the first recorded song. The first target time period includes the start timestamp and the end timestamp of the first recorded song segment in the first recorded song. Optionally, the singing client determines the first target time period according to a selection operation, by the user logged in to the first user account, of the start timestamp and end timestamp in the first recorded song, and determines the first target lyrics according to that user's selection operation on the lyrics in the first recorded song. The first target lyrics comprise one or more lyrics in the first recorded song.
For example, if the duration of the first recorded song is 05:00:00 (representing 5 minutes), the start timestamp of the first recorded song segment in the first recorded song is 01:00:00, and its end timestamp is 03:00:00, then the first recorded song segment is the portion of the first recorded song from 01:00:00 to 03:00:00.
The second recorded song segment is a recorded song segment corresponding to a second target time period in the second recorded song, a recorded song segment corresponding to second target lyrics in the second recorded song, or a recorded song segment randomly determined by the singing client in the second recorded song. The second target time period includes the start timestamp and the end timestamp of the second recorded song segment in the second recorded song. Optionally, the singing client determines the second target time period according to a selection operation, by the user logged in to the first user account, of the start timestamp and end timestamp in the second recorded song, and determines the second target lyrics according to that user's selection operation on the lyrics in the second recorded song. The second target lyrics comprise one or more lyrics in the second recorded song. Optionally, among the first recorded song segment and the second recorded song segment there are recorded song segments that include the same lyrics.
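As an illustration of cutting a segment out of a recorded song by a target time period, here is a hedged sketch assuming the third-party pydub audio library; the patent does not name any library, and the file name and durations are hypothetical:

```python
from pydub import AudioSegment  # assumed third-party audio library

def extract_segment(recording_path: str, start_ms: int, end_ms: int) -> AudioSegment:
    # start_ms / end_ms are the start and end timestamps of the segment
    # within the recorded song, expressed in milliseconds.
    song = AudioSegment.from_file(recording_path)
    return song[start_ms:end_ms]  # pydub slices audio by milliseconds

# Matching the example above: a 5-minute recording, segment from 01:00:00 to 03:00:00.
first_segment = extract_segment("first_recorded_song.mp3", 60_000, 180_000)
```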
Step 403, generating a chorus song according to the first recorded song segment and the second recorded song segment.
When the first recorded song segment and the second recorded song segment include different lyrics, the singing client generates a chorus song from the first recorded song segment and the second recorded song segment according to the time order of the lyrics in the segments.
When the first recorded song segment and the second recorded song segment include the same lyrics, the singing client optionally superimposes the voices of the recorded song segments with the same lyrics to obtain a superimposed segment, and then generates a chorus song from the superimposed segment and the recorded song segments with different lyrics, again according to the time order of the lyrics in the segments.
Illustratively, the first recorded song segment includes lyrics a, b, and c, and the second recorded song segment includes lyrics b, c, and d. When merging the two segments into a chorus song, the singing client superimposes the voices of the sub-segments corresponding to lyrics b and c in the first recorded song segment with those corresponding to lyrics b and c in the second recorded song segment to obtain a superimposed segment, and then combines the sub-segment corresponding to lyric a in the first recorded song segment, the superimposed segment, and the sub-segment corresponding to lyric d in the second recorded song segment to generate the chorus song.
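A sketch of this superposition-and-merge step, under the same pydub assumption as above; the per-lyric sub-segment files mirror the a/b/c/d example and are hypothetical:

```python
from pydub import AudioSegment

# Sub-segments cut from the two recordings: the first covers lyrics a, b, c
# and the second covers lyrics b, c, d (hypothetical file names).
seg_a = AudioSegment.from_file("first_lyric_a.wav")
seg_bc_first = AudioSegment.from_file("first_lyrics_bc.wav")
seg_bc_second = AudioSegment.from_file("second_lyrics_bc.wav")
seg_d = AudioSegment.from_file("second_lyric_d.wav")

# Superimpose the voices where the lyrics are the same. Both sub-segments
# follow the same accompaniment, so their durations should already match;
# overlay() keeps the duration of the segment it is called on.
superimposed = seg_bc_first.overlay(seg_bc_second)

# Concatenate in lyric time order to produce the chorus song.
chorus_song = seg_a + superimposed + seg_d
chorus_song.export("chorus_song.wav", format="wav")
```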
In summary, according to the song synthesizing method provided by the embodiment of the present application, a first recorded song segment is obtained from a first recorded song of the at least two recorded songs, a second recorded song segment is obtained from a second recorded song of the at least two recorded songs, and a chorus song is then generated from the first recorded song segment and the second recorded song segment. The user can freely select recorded songs of different user accounts and, for each selected recorded song, flexibly select the recorded song segment to be synthesized. Efficient and flexible synthesis of chorus songs is thus achieved. The embodiment of the application provides a new way of synthesizing chorus songs.
Fig. 5 is a flow chart of another song synthesizing method according to an embodiment of the present application. The method may be used for singing clients on any terminal in a song composition system as shown in fig. 3. As shown in fig. 5, the method includes:
step 501, obtaining at least two recorded songs, where the at least two recorded songs include a first recorded song of a first user account and a second recorded song of a second user account.
The first user account is any user account in the singing client. The second user account is any user account in the singing client. The first user account is different from the second user account. The first recorded song is any recorded song of the first user account. The second recorded song is any recorded song of the second user account. The first recorded song and the second recorded song include the same song accompaniment.
In one possible implementation, as shown in fig. 6, the implementation procedure of step 501 includes the following steps 5011a to 5011c:
in step 5011a, a first recorded song is obtained.
Optionally, the singing client obtains the first recorded song when it displays the details of the first recorded song in the user interface of the user logged in to the first user account, or when it receives from the server the recorded song identifier of at least one recorded song recommended according to the first recorded song. The singing client acquires the first recorded song from the server according to the recorded song identifier of the first recorded song.
In step 5011b, at least one recommended recorded song recommended according to the first recorded song is displayed, the recommended recorded song comprising a recorded song of the second user account and the same song accompaniment as the first recorded song.
Optionally, the server obtains, according to the accompaniment identifier of the first recorded song obtained by the singing client, information about recorded songs of other user accounts that include the same accompaniment identifier, generates recommendation information, and sends it to the singing client. The singing client receives the recommendation information and, according to it, displays at least one recorded song recommended according to the first recorded song in the user interface of the user logged in to the first user account. The recommendation information comprises information about the other user accounts and about their recorded songs: the account names of the other user accounts, and the names, recorded song identifiers and/or recording times of their recorded songs.
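A server-side sketch of this recommendation step, reusing the hypothetical `RecordedSong` record from the earlier sketch; the function name and the idea of filtering a flat list are assumptions, not the patent's implementation:

```python
from typing import Iterable, List

def recommend_chorus_candidates(first_song: "RecordedSong",
                                public_songs: Iterable["RecordedSong"]) -> List["RecordedSong"]:
    # Recommend recorded songs of *other* user accounts that carry the same
    # accompaniment identifier as the first recorded song.
    return [song for song in public_songs
            if song.accompaniment_id == first_song.accompaniment_id
            and song.user_account != first_song.user_account]
```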
Fig. 7 is a schematic diagram of a user interface for displaying details of a first recorded song according to an embodiment of the present application. As shown in fig. 7, the user interface includes the song title 701 of the first recorded song (that is, the title of its song accompaniment), account information 702 of the first user account (comprising the account name and avatar of the first user account), a "find chorus" button 703, the lyrics of the first recorded song, playing progress information of the first recorded song, and a play control button for the first recorded song. When the singing client detects a click operation on the "find chorus" button, it displays at least one recorded song recommended according to the first recorded song, either in the current user interface or in a separate user interface for displaying recommended recorded songs.
Fig. 8 is a schematic diagram of a user interface for displaying recommended recorded songs according to an embodiment of the present application. As shown in fig. 8, the user interface includes the song title of the recommended recorded songs (that is, the title of their song accompaniment), and recommended recorded song information 801, 802 and 803 for user accounts 1, 2 and 3. Each piece of recommended recorded song information comprises the account name of the other user account, the name of the recommended recorded song, and the number of times the corresponding user account has chorused. Optionally, the singing client displays all recommended recorded songs indicated by the recommendation information sent by the server, or displays only the recommended recorded songs of user accounts that have a friend relationship with the first user account.
In step 5011c, a second recorded song is obtained from the at least one recommended recorded song in response to a selection instruction triggered by the first selection operation.
The first selection operation refers to a selection operation of a user logging in the first user account for recommending recorded songs. Optionally, the selection instruction triggered by the first selection operation includes a recorded song identifier of the second recorded song, and the singing client obtains the second recorded song from the server according to the recorded song identifier. Optionally, the singing client may further obtain a plurality of recorded songs from at least one recommended recorded song, where the user accounts corresponding to the plurality of recorded songs are different from the first user account and the second user account.
Illustratively, with continued reference to FIG. 8, the recommended recorded song information in the user interface also includes a selection box for each recommended recorded song and a chorus button 804. When the selection box of the recommended recorded song of user account 1 is in the selected state and the singing client detects a click operation on the chorus button 804, the singing client obtains the second recorded song from the server according to the recorded song identifier of the recommended recorded song of user account 1.
In another possible implementation, as shown in fig. 9, the implementation procedure of step 501 includes the following steps 5012a to 5012c:
in step 5012a, the recorded song of the second user account is displayed.
Optionally, the recorded song of the second user account displayed by the singing client is any recorded song of the second user account. The singing client displays the recorded song of the second user account in the user interface of the user logged in to the first user account, optionally when it receives a selection operation by that user on the recorded song of the second user account; that is, the user logged in to the first user account views the recorded song of the second user account.
Fig. 10 is a schematic diagram of a user interface for displaying a recorded song of a second user account according to an embodiment of the present application. As shown in fig. 10, the user interface includes the song title 1001 of the recorded song of the second user account, account information 1002 of the second user account (comprising the account name and avatar of the second user account), the lyrics of the recorded song of the second user account, its playing progress information, and its play control button.
In step 5012b, a second recorded song is obtained from the recorded songs of the second user account in response to the selection instruction triggered by the second selection operation.
The second selection operation refers to a selection operation of a user logging in the first user account for the recorded song of the second user account. Optionally, the selection instruction triggered by the second selection operation includes a recorded song identifier of the second recorded song, and the singing client obtains the second recorded song from the server according to the recorded song identifier.
Illustratively, with continued reference to FIG. 10, the user interface also includes a "chorus with user 2" button 1003. When the singing client detects a click operation on the "chorus with user 2" button, it obtains the second recorded song from the server according to the recorded song identifier of the second recorded song in the selection instruction triggered by the click operation.
Optionally, after the singing client obtains the second recorded song from the recorded songs of the second user account according to the selection instruction triggered by the second selection operation, it displays a user interface for selecting a chorus mode. Illustratively, FIG. 11 is a schematic diagram of a user interface for chorus mode selection provided by an embodiment of the present application. As shown in fig. 11, the user interface includes a chorus mode selection popup window 1101. The popup includes a "select existing chorus" option 1102, by which the user logged in to the first user account selects, from the recorded songs of the first user account, a first recorded song to chorus with the second recorded song. It also includes a "re-record chorus" option 1103, by which the user logged in to the first user account re-records a first recorded song over the song accompaniment of the second recorded song and choruses with the second recorded song.
In step 5012c, a first recorded song is obtained from the recorded songs of the first user account based on the second recorded song.
Optionally, as shown in fig. 12, the implementation procedure of the step 5012c includes the following steps 51a and 51b:
in step 51a, according to the second recorded song, a candidate recorded song of the first user account is displayed, the candidate recorded song including the same song accompaniment as the second recorded song.
Optionally, the candidate recorded songs of the first user account include any recorded song of the first user account. Displaying the candidate recorded songs of the first user account according to the second recorded song means that the singing client displays those recorded songs of the first user account that have the same song accompaniment as the second recorded song.
Illustratively, with continued reference to FIG. 11, the candidate recorded songs of the first user account are displayed according to the second recorded song after the "select existing chorus" option is triggered.
In step 51b, a first recorded song is obtained from the candidate recorded songs in response to a selection instruction triggered by the third selection operation.
The third selection operation refers to a selection operation of the candidate recorded song by a user logging in the first user account. Optionally, the selection instruction triggered by the third selection operation includes a recorded song identifier of the first recorded song, and the singing client obtains the first recorded song from the server according to the recorded song identifier.
Fig. 13 is a schematic diagram of a user interface for selecting a first recorded song from candidate recorded songs according to an embodiment of the present application. As shown in fig. 13, the user interface includes a recorded song selection window 1301, which contains first candidate recorded song information 1302 and second candidate recorded song information 1303 of the first user account. The candidate recorded song information includes the song accompaniment of the candidate recorded song and its upload time. When the singing client detects a click operation on a piece of candidate recorded song information, it acquires the first recorded song from the server according to the recorded song identifier in the selection instruction triggered by the click operation.
In yet another possible implementation, as shown in fig. 14, the implementation procedure of step 501 includes the following steps 5013a and 5013b:
in step 5013a, a recorded song plaza interface is displayed, the recorded song plaza interface comprising at least two public recorded songs, the public recorded songs comprising the same song accompaniment.
Optionally, the public recorded song refers to a recorded song that is released to the recorded song square by a user who logs in any user account in the singing client. Optionally, the public recorded song includes any recorded song in the singing client.
For an example of the recorded song plaza interface displayed by the singing client, refer to fig. 2; the details are not repeated here for brevity.
In step 5013b, a first recorded song and a second recorded song are obtained from the at least two public recorded songs in response to a selection instruction triggered by the fourth selection operation.
The fourth selection operation refers to a selection operation by a user logged in to any user account on the public recorded songs. Optionally, the selection instruction triggered by the fourth selection operation includes the recorded song identifier of the first recorded song, the recorded song identifier of the second recorded song, and/or the recorded song identifiers of other recorded songs, where the user account corresponding to each recorded song is different. The singing client obtains the first recorded song, the second recorded song and/or the other recorded songs from the server according to these recorded song identifiers.
Optionally, the recorded songs in the embodiment of the present application also include recorded songs for which a multiuser chorus has already been completed. For example, the recorded song of user account 1 is a1, the recorded song of user account 2 is b2, and the recorded song of user account 3 is c3, where recorded songs a1, b2 and c3 comprise the same song accompaniment. The user logged in to user account 1 selects recorded song b2 of user account 2 to chorus with his own recorded song a1, obtaining recorded song a2. A user logged in to user account 4 may then select recorded song a2 and recorded song c3 of user account 3 from the public recorded songs to chorus.
Fig. 15 is a schematic diagram of a user interface for displaying a recorded song for which a multiuser chorus has been completed, according to an embodiment of the present application. As shown in fig. 15, the user interface includes the song title 1501 of the recorded song, chorus user account information 1502 (comprising the account name and avatar of the first chorus user account and the account name and avatar of the second chorus user account), a "chorus" button 1503, the lyrics of the recorded song, its playing progress information, and its play control button. When the singing client detects a click operation on the "chorus" button, a multiuser chorus can be performed according to the method in steps 5012a to 5012c described above.
Step 502, responding to a selection instruction triggered by the fifth selection operation, and acquiring a first recorded song segment from the first recorded song.
The fifth selection operation refers to a selection operation on a song segment in the first recorded song. Optionally, the selection instruction triggered by the fifth selection operation includes a first target time period or a first target lyric identifier. The first target time period includes the start timestamp and end timestamp of the first recorded song segment in the first recorded song; the first target lyric identifier identifies the lyrics included in the first recorded song segment. The singing client acquires the first recorded song segment from the first recorded song according to the first target time period or the first target lyric identifier. When the selection instruction triggered by the fifth selection operation includes neither a first target time period nor a first target lyric identifier, the singing client randomly acquires the first recorded song segment from the first recorded song.
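The priority order described above (explicit time period, then lyric identifier, then random choice) could look like the following hedged sketch; every name here is hypothetical, and lyric timing is modeled as a list of (start_ms, end_ms) pairs per lyric line:

```python
import random
from typing import Optional, Sequence, Tuple

def resolve_segment(lyric_times: Sequence[Tuple[int, int]],
                    target_period: Optional[Tuple[int, int]] = None,
                    target_lyrics: Optional[Sequence[int]] = None) -> Tuple[int, int]:
    # Returns the (start_ms, end_ms) of the recorded song segment.
    if target_period is not None:   # selection instruction carries a target time period
        return target_period
    if target_lyrics:               # selection instruction carries a target lyric identifier
        return (lyric_times[target_lyrics[0]][0], lyric_times[target_lyrics[-1]][1])
    # Neither is present: pick a contiguous run of lyric lines at random.
    i = random.randrange(len(lyric_times))
    j = random.randrange(i, len(lyric_times))
    return (lyric_times[i][0], lyric_times[j][1])
```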
Fig. 16 is a schematic diagram of a user interface in which the singing client randomly acquires song segments, according to an embodiment of the present application. As shown in fig. 16, the user interface includes the song title 1601 of the first recorded song of user account 1 (that is, the song accompaniment of the first recorded song), user account information 1602 of user account 1 (comprising the account name and avatar of user account 1), user account information 1603 of user account 2 (comprising the account name and avatar of user account 2), and the lyric fragments corresponding to each user account. The song segment randomly acquired by the singing client from the first recorded song of user account 1 comprises the song segments corresponding to lyric 1, lyric 2 and lyric 3.
Step 503, responding to a selection instruction triggered by the sixth selection operation, and obtaining a second recorded song segment from the second recorded song.
The sixth selection operation refers to a selection operation on a song segment in the second recorded song. Optionally, the selection instruction triggered by the sixth selection operation includes a second target time period or a second target lyric identifier. The second target time period includes the start timestamp and end timestamp of the second recorded song segment in the second recorded song; the second target lyric identifier identifies the lyrics included in the second recorded song segment. The singing client acquires the second recorded song segment from the second recorded song according to the second target time period or the second target lyric identifier. When the selection instruction triggered by the sixth selection operation includes neither a second target time period nor a second target lyric identifier, the singing client randomly acquires the second recorded song segment from the second recorded song.
With continued reference to fig. 16, the song segments randomly acquired by the singing client from the second recorded song of the user account 2 include song segments corresponding to lyrics 4, 5, and 6.
Illustratively, FIG. 17 is a schematic diagram of a user interface for user selection of song segments provided by an embodiment of the present application. As shown in fig. 17, the user interface includes the song title 1701 of the first recorded song (that is, the song accompaniment of the first recorded song), an "I sing" identifier 1702, a "chorus" identifier 1703, a confirm button 1704, the lyrics of the first recorded song, and a selection box for each lyric. When the selection box under the "I sing" identifier 1702 is selected, the lyric in that row belongs to the first recorded song segment and not to the second recorded song segment. When the selection box under the "chorus" identifier 1703 is selected, the lyric in that row belongs to both the first recorded song segment and the second recorded song segment. When neither selection box in a lyric's row is selected, that lyric is included in neither segment. When the singing client detects a click operation on the confirm button, it acquires the first recorded song segment from the first recorded song and the second recorded song segment from the second recorded song according to the selected lyrics, as sketched below.
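For illustration, the mapping from the two selection-box columns of FIG. 17 to the two segments might be expressed as follows (a sketch; the per-line states and names are hypothetical):

```python
# Per-lyric-line selection state: "i_sing" = first segment only,
# "chorus" = both segments, None = neither segment.
selections = {0: "i_sing", 1: "chorus", 2: None, 3: "chorus"}

first_segment_lines = [line for line, state in selections.items()
                       if state in ("i_sing", "chorus")]
second_segment_lines = [line for line, state in selections.items()
                        if state == "chorus"]
```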
Step 504, a chorus song is generated according to the first recorded song segment and the second recorded song segment.
When the first recorded song segment and the second recorded song segment include different lyrics, the singing client generates a chorus song from the first recorded song segment and the second recorded song segment according to the time order of the lyrics in the segments.
When recorded song segments with the same lyrics exist in the first recorded song segment and the second recorded song segment, the singing client optionally superimposes the voices of those segments to obtain a superimposed segment, and generates a chorus song from the superimposed segment and the recorded song segments with different lyrics, according to the time order of the lyrics in the segments.
For example, after a male user uploads, through the first user account, a first recorded song containing the male voice part of a male-and-female duet, he may not be able to quickly invite a female user to chorus. With the song synthesizing method provided by the embodiment of the present application, the male user can find a second recorded song, already uploaded under a second user account, that contains the female voice part of the first recorded song, and actively synthesize a chorus without waiting for other users.
For another example, a user logged in to the first user account may find, when listening to the second recorded song of the second user account, that the first user account already has a recorded song that includes the same song accompaniment as the second recorded song. With the song synthesizing method provided by the embodiment of the present application, the user logged in to the first user account selects a first recorded song of the first user account, selects a first recorded song segment from it, and choruses with the second recorded song, without having to record the song again.
Optionally, the step of generating a chorus song based on the first recorded song segment and the second recorded song segment can also be performed by a server. After generating the chorus song, the server sends the chorus song to the singing client.
Optionally, a recorded song in the embodiments of the present application refers to recorded song audio or recorded song video. When the recorded song is a recorded song video, the recorded song segment includes an audio segment and a video picture segment of the recorded song, the video picture segment being synchronized with the audio segment. Optionally, when synthesizing the recorded song segments, the singing client also synthesizes the video picture segments of the recorded song segments: it arranges the video picture segments horizontally or vertically and merges them into one video picture.
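A minimal sketch of merging two synchronized video frames into one picture, assuming frames are available as NumPy arrays (the patent does not specify a representation):

```python
import numpy as np

def combine_frames(frame_a: np.ndarray, frame_b: np.ndarray,
                   side_by_side: bool = True) -> np.ndarray:
    # frame_a / frame_b: synchronized video frames, each an HxWx3 array.
    if side_by_side:
        return np.hstack([frame_a, frame_b])  # horizontal arrangement; heights must match
    return np.vstack([frame_a, frame_b])      # vertical arrangement; widths must match
```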
In summary, according to the song synthesizing method provided by the embodiment of the present application, a first recorded song segment is obtained from a first recorded song of the at least two recorded songs, a second recorded song segment is obtained from a second recorded song of the at least two recorded songs, and a chorus song is then generated from the first recorded song segment and the second recorded song segment. The user can freely select recorded songs of different user accounts and, for each selected recorded song, flexibly select the recorded song segment to be synthesized. Efficient and flexible synthesis of chorus songs is thus achieved. The embodiment of the application provides a new way of synthesizing chorus songs.
Optionally, the user can select a recorded song recommended according to his own recorded song and chorus with it, which reduces the difficulty of finding songs to chorus with.
Optionally, when viewing the recorded songs of other users, the user can select one of his own recorded songs to chorus with them, which simplifies the user's operation.
Optionally, the user can select at least two songs to chorus in the recorded song plaza interface, which improves the user experience.
In addition, the user can chorus with a recorded song that is already a chorus of multiple users, which makes synthesizing chorus songs more engaging.
It should be noted that the order of the steps of the song synthesizing method provided in the embodiments of the present application may be adjusted as appropriate, and steps may be added or removed according to circumstances. Any variation readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application and is not described further herein.
Fig. 18 is a schematic structural diagram of a song synthesizing apparatus according to an embodiment of the present application. The apparatus can be used in a singing client on any terminal of the song synthesis system shown in Fig. 3. As shown in Fig. 18, the apparatus 180 includes:
the first obtaining module 1801 is configured to obtain at least two recorded songs, where the at least two recorded songs include a first recorded song of a first user account and a second recorded song of a second user account, the first recorded song and the second recorded song include the same song accompaniment, and the first user account and the second user account are different.
The second obtaining module 1802 is configured to obtain a first recorded song segment according to a first recorded song, where the first recorded song segment is a song segment in the first recorded song; the second obtaining module is further configured to obtain a second recorded song segment according to the second recorded song, where the second recorded song segment is a song segment in the second recorded song.
The generating module 1803 is configured to generate a chorus song according to the first recorded song segment and the second recorded song segment.
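The module split of Fig. 18 could be mirrored in code roughly as follows; this is a structural sketch only, and all class and method names are assumptions rather than the apparatus's actual interface:

    # Structural sketch of the three modules of apparatus 180; the bodies
    # are stubs and the names are illustrative assumptions.
    class SongSynthesisApparatus:
        def acquire_recorded_songs(self, first_account: str, second_account: str):
            """First obtaining module: fetch two recorded songs that share
            the same song accompaniment but belong to different accounts."""
            ...

        def acquire_segments(self, first_song, second_song):
            """Second obtaining module: cut the selected segment out of
            each recorded song, e.g. in response to a user selection."""
            ...

        def generate_chorus(self, first_segment, second_segment):
            """Generating module: superimpose same-lyric segments and
            splice the rest in lyric time order into the chorus song."""
            ...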
In summary, in the song synthesizing apparatus provided by the embodiment of the present application, the first obtaining module obtains at least two recorded songs, the second obtaining module obtains a first recorded song segment from the first recorded song and a second recorded song segment from the second recorded song, and the generating module then generates a chorus song from the first recorded song segment and the second recorded song segment. A user can freely select recorded songs of different user accounts and, for each selected recorded song, flexibly select the recorded song segment to be synthesized, so that chorus songs are synthesized efficiently and flexibly. The embodiment of the present application thereby provides a new way of synthesizing chorus songs.
Optionally, the first obtaining module 1801 is configured to:
acquire a first recorded song; display at least one recommended recorded song according to the first recorded song, where the recommended recorded songs include a recorded song of the second user account and include the same song accompaniment as the first recorded song; and, in response to a selection instruction triggered by a first selection operation, acquire the second recorded song from the at least one recommended recorded song.
Optionally, the first obtaining module 1801 is configured to:
display the recorded songs of the second user account; in response to a selection instruction triggered by a second selection operation, acquire the second recorded song from the recorded songs of the second user account; and acquire the first recorded song from the recorded songs of the first user account according to the second recorded song.
Optionally, the first obtaining module 1801 is configured to:
display candidate recorded songs of the first user account according to the second recorded song, where the candidate recorded songs and the second recorded song include the same song accompaniment; and, in response to a selection instruction triggered by a third selection operation, acquire the first recorded song from the candidate recorded songs.
Optionally, the first obtaining module 1801 is configured to:
display a recorded song plaza interface, where the recorded song plaza interface includes at least two public recorded songs that include the same song accompaniment; and, in response to a selection instruction triggered by a fourth selection operation, acquire the first recorded song and the second recorded song from the at least two public recorded songs.
Optionally, the second obtaining module 1802 is configured to:
in response to a selection instruction triggered by a fifth selection operation, acquire the first recorded song segment from the first recorded song.
The second obtaining module 1802 is further configured to:
in response to a selection instruction triggered by a sixth selection operation, acquire the second recorded song segment from the second recorded song.
Optionally, the generating module 1803 is configured to:
when recorded song segments with the same lyrics exist in the first recorded song segment and the second recorded song segment, superimpose the vocals of the same-lyric segments to obtain a superimposed segment; and generate the chorus song according to the superimposed segment and the recorded song segments with different lyrics in the first recorded song segment and the second recorded song segment.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus and each module described above may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
The embodiments of the present application also provide a computer device comprising a processor and a memory, where at least one instruction, at least one program, a code set, or an instruction set is stored in the memory and is loaded and executed by the processor to implement the song synthesizing method provided by each of the method embodiments.
The computer device may be a terminal. Fig. 19 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Generally, terminal 1900 includes: a processor 1901 and a memory 1902.
Processor 1901 may include one or more processing cores, for example a 4-core or an 8-core processor. The processor 1901 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1901 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1901 may incorporate a GPU (Graphics Processing Unit) for rendering the content that the display screen needs to display. In some embodiments, the processor 1901 may also include an AI (Artificial Intelligence) processor for computing operations related to machine learning.
Memory 1902 may include one or more computer-readable storage media, which may be non-transitory. Memory 1902 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 1902 stores at least one instruction that is executed by processor 1901 to implement the song synthesizing method provided by the method embodiments herein.
In some embodiments, terminal 1900 may optionally further include: a peripheral interface 1903 and at least one peripheral. The processor 1901, memory 1902, and peripheral interface 1903 may be connected by a bus or signal line. The individual peripheral devices may be connected to the peripheral device interface 1903 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1904, display 1905, camera assembly 1906, audio circuitry 1907, positioning assembly 1908, and power supply 1909.
Peripheral interface 1903 may be used to connect at least one Input/Output (I/O) related peripheral to processor 1901 and memory 1902. In some embodiments, processor 1901, memory 1902, and peripheral interface 1903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of processor 1901, memory 1902, and peripheral interface 1903 may be implemented on a separate chip or circuit board, which is not limited in this application.
The radio frequency circuit 1904 is configured to receive and transmit RF (Radio Frequency) signals, also referred to as electromagnetic signals. The radio frequency circuit 1904 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1904 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1904 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1904 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display 1905 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When display 1905 is a touch display, it can also collect touch signals at or above its surface; the touch signal may be input to the processor 1901 as a control signal for processing. In this case, the display 1905 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 1905, provided on the front panel of the terminal 1900; in other embodiments, there may be at least two displays 1905, each disposed on a different surface of the terminal 1900 or in a folded design; in still other embodiments, display 1905 may be a flexible display disposed on a curved or folded surface of terminal 1900. The display screen 1905 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly shaped screen. The display 1905 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 1906 is used to capture images or video. Optionally, camera assembly 1906 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of terminal 1900 and the rear camera is disposed on the back of the terminal. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to implement background blurring by fusing the main camera and the depth-of-field camera, panoramic and VR (Virtual Reality) shooting by fusing the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, camera assembly 1906 may also include a flash, which can be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuit 1907 may include a microphone and a speaker. The microphone collects sound waves of the user and the environment, converts them into electrical signals, and inputs them to the processor 1901 for processing or to the radio frequency circuit 1904 for voice communication. For stereo acquisition or noise reduction, there may be multiple microphones, each disposed at a different location on the terminal 1900. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker converts electrical signals from the processor 1901 or the radio frequency circuit 1904 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans, for ranging and other purposes. In some embodiments, audio circuit 1907 may also include a headphone jack.
The positioning component 1908 is used to locate the current geographic location of the terminal 1900 for navigation or LBS (Location Based Service). The positioning component 1908 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
A power supply 1909 is used to power the various components in terminal 1900. The power supply 1909 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 1909 includes a rechargeable battery, the battery may be a wired rechargeable battery, charged through a wired line, or a wireless rechargeable battery, charged through a wireless coil. The rechargeable battery may also support fast-charge technology.
In some embodiments, terminal 1900 also includes one or more sensors 1910. The one or more sensors 1910 include, but are not limited to: acceleration sensor 1911, gyroscope sensor 1912, pressure sensor 1913, fingerprint sensor 1914, optical sensor 1915, and proximity sensor 1916.
The acceleration sensor 1911 may detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with the terminal 1900. For example, the acceleration sensor 1911 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1901 may control the touch display 1905 to display a user interface in either a landscape view or a portrait view based on gravitational acceleration signals acquired by the acceleration sensor 1911. Acceleration sensor 1911 may also be used for the acquisition of motion data of a game or user.
The gyro sensor 1912 may detect a body direction and a rotation angle of the terminal 1900, and the gyro sensor 1912 may collect a 3D motion of the user on the terminal 1900 in cooperation with the acceleration sensor 1911. The processor 1901 may implement the following functions based on the data collected by the gyro sensor 1912: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
Pressure sensor 1913 may be disposed on a side border of terminal 1900 and/or below touch display 1905. When the pressure sensor 1913 is disposed on the side frame of the terminal 1900, a grip signal of the terminal 1900 by the user can be detected, and the processor 1901 performs left-right hand recognition or quick operation according to the grip signal collected by the pressure sensor 1913. When the pressure sensor 1913 is disposed at the lower layer of the touch display screen 1905, the processor 1901 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1905. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1914 is used to collect the user's fingerprint, and the user's identity is identified from the collected fingerprint either by the processor 1901 or by the fingerprint sensor 1914 itself. Upon recognizing that the user's identity is a trusted identity, the processor 1901 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1914 may be disposed on the front, back, or side of the terminal 1900. When a physical key or vendor logo is provided on terminal 1900, the fingerprint sensor 1914 may be integrated with it.
The optical sensor 1915 is used to collect ambient light intensity. In one embodiment, the processor 1901 may control the display brightness of the touch display 1905 based on the ambient light intensity collected by the optical sensor 1915: when the ambient light intensity is high, the display brightness is increased; when it is low, the display brightness is decreased. In another embodiment, the processor 1901 may also dynamically adjust the shooting parameters of the camera assembly 1906 based on the collected ambient light intensity.
A proximity sensor 1916, also referred to as a distance sensor, is typically provided on the front panel of terminal 1900. The proximity sensor 1916 collects the distance between the user and the front of the terminal 1900. In one embodiment, when the proximity sensor 1916 detects that this distance gradually decreases, the processor 1901 controls the touch display 1905 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1916 detects that this distance gradually increases, the processor 1901 controls the touch display 1905 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 19 is not limiting and that terminal 1900 may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
The embodiment of the application also provides a computer storage medium, which can store at least one instruction, at least one section of program, a code set or an instruction set, and the at least one instruction, the at least one section of program, the code set or the instruction set is loaded and executed by a processor to realize the song synthesizing method provided by each method embodiment.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by a program to instruct related hardware, and the program may be stored in a computer readable storage medium, where the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description is merely illustrative of preferred embodiments of the present application and is not intended to limit it; any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present application shall be included within its protection scope.

Claims (6)

1. A method of song composition, the method comprising:
acquiring at least two recorded songs, wherein the at least two recorded songs comprise a first recorded song of a first user account and a second recorded song of a second user account, the first recorded song and the second recorded song comprise the same song accompaniment, and the first user account and the second user account are different;
acquiring a first recorded song segment according to the first recorded song, wherein the first recorded song segment is a song segment in the first recorded song; obtaining a second recorded song segment according to the second recorded song, wherein the second recorded song segment is a song segment in the second recorded song;
generating a chorus song according to the first recorded song segment and the second recorded song segment;
wherein the obtaining at least two recorded songs comprises:
displaying the recorded songs of the second user account in a user interface of a user logging in the first user account; responding to a selection instruction triggered by a second selection operation performed by the user logging in the first user account on the recorded songs of the second user account, and acquiring the second recorded song from the recorded songs of the second user account;
displaying candidate recorded songs of the first user account in the user interface of the user logging in the first user account according to the second recorded song, wherein the candidate recorded songs and the second recorded song comprise the same song accompaniment; and responding to a selection instruction triggered by a third selection operation performed by the user logging in the first user account on the candidate recorded songs, and acquiring the first recorded song from the candidate recorded songs.
2. The method of claim 1, wherein
the obtaining the first recorded song segment includes:
responding to a selection instruction triggered by a fifth selection operation, and acquiring the first recorded song segment from the first recorded song;
the obtaining the second recorded song segment includes:
and responding to a selection instruction triggered by a sixth selection operation, and acquiring the second recorded song segment from the second recorded song.
3. The method of claim 1, wherein generating a chorus song from the first recorded song segment and the second recorded song segment comprises:
when recorded song segments with the same lyrics exist in the first recorded song segment and the second recorded song segment, superimposing the vocals in the recorded song segments with the same lyrics to obtain a superimposed segment;
and generating the chorus song according to the superimposed segment and the recorded song segments with different lyrics in the first recorded song segment and the second recorded song segment.
4. A song-synthesizing apparatus, the apparatus comprising:
the first acquisition module is used for acquiring at least two recorded songs, wherein the at least two recorded songs comprise a first recorded song of a first user account and a second recorded song of a second user account, the first recorded song and the second recorded song comprise the same song accompaniment, and the first user account and the second user account are different;
the second acquisition module is used for acquiring a first recorded song segment according to the first recorded song, wherein the first recorded song segment is a song segment in the first recorded song; the second obtaining module is further configured to obtain a second recorded song segment according to the second recorded song, where the second recorded song segment is a song segment in the second recorded song;
the generation module is used for generating a chorus song according to the first recorded song segment and the second recorded song segment;
wherein the obtaining at least two recorded songs comprises:
displaying the recorded songs of the second user account in a user interface of a user logging in the first user account; responding to a selection instruction triggered by a second selection operation performed by the user logging in the first user account on the recorded songs of the second user account, and acquiring the second recorded song from the recorded songs of the second user account;
displaying candidate recorded songs of the first user account in the user interface of the user logging in the first user account according to the second recorded song, wherein the candidate recorded songs and the second recorded song comprise the same song accompaniment; and responding to a selection instruction triggered by a third selection operation performed by the user logging in the first user account on the candidate recorded songs, and acquiring the first recorded song from the candidate recorded songs.
5. A computer device comprising a processor and a memory, wherein the memory has stored therein at least one program that is loaded and executed by the processor to implement the song composition method of any one of claims 1 to 3.
6. A computer storage medium having stored therein at least one program loaded and executed by a processor to implement the song composition method of any one of claims 1 to 3.
CN202010442261.4A 2020-05-22 2020-05-22 Song synthesis method, device, equipment and storage medium Active CN111599328B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010442261.4A CN111599328B (en) 2020-05-22 2020-05-22 Song synthesis method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111599328A CN111599328A (en) 2020-08-28
CN111599328B 2024-04-09

Family

ID=72192477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010442261.4A Active CN111599328B (en) 2020-05-22 2020-05-22 Song synthesis method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111599328B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014064066A (en) * 2012-09-19 2014-04-10 Sd Advisors Co Ltd Data generation method, data generation system, server unit for performing data generation, and program
CN106486128A (en) * 2016-09-27 2017-03-08 腾讯科技(深圳)有限公司 A kind of processing method and processing device of double-tone source audio data
CN108269560A (en) * 2017-01-04 2018-07-10 北京酷我科技有限公司 A kind of speech synthesizing method and system
CN109119057A (en) * 2018-08-30 2019-01-01 Oppo广东移动通信有限公司 Musical composition method, apparatus and storage medium and wearable device
CN110349559A (en) * 2019-07-12 2019-10-18 广州酷狗计算机科技有限公司 Carry out audio synthetic method, device, system, equipment and storage medium
CN110675848A (en) * 2019-09-30 2020-01-10 腾讯音乐娱乐科技(深圳)有限公司 Audio processing method, device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10701008B2 (en) * 2015-12-17 2020-06-30 Facebook, Inc. Personal music compilation


Also Published As

Publication number Publication date
CN111599328A (en) 2020-08-28

Similar Documents

Publication Publication Date Title
CN110336960B (en) Video synthesis method, device, terminal and storage medium
CN110233976B (en) Video synthesis method and device
CN109033335B (en) Audio recording method, device, terminal and storage medium
CN108538302B (en) Method and apparatus for synthesizing audio
CN109168073B (en) Method and device for displaying cover of live broadcast room
CN110491358B (en) Method, device, equipment, system and storage medium for audio recording
CN110545476B (en) Video synthesis method and device, computer equipment and storage medium
CN109327608B (en) Song sharing method, terminal, server and system
CN108965757B (en) Video recording method, device, terminal and storage medium
CN109922356B (en) Video recommendation method and device and computer-readable storage medium
CN109144346B (en) Song sharing method and device and storage medium
CN110209871B (en) Song comment issuing method and device
CN111061405B (en) Method, device and equipment for recording song audio and storage medium
CN110266982B (en) Method and system for providing songs while recording video
CN111402844B (en) Song chorus method, device and system
CN111711838B (en) Video switching method, device, terminal, server and storage medium
CN111142838A (en) Audio playing method and device, computer equipment and storage medium
CN110213624B (en) Online interaction method and device
CN113204672B (en) Resource display method, device, computer equipment and medium
CN108055349B (en) Method, device and system for recommending K song audio
CN111064657B (en) Method, device and system for grouping concerned accounts
CN112118482A (en) Audio file playing method and device, terminal and storage medium
CN109448676B (en) Audio processing method, device and storage medium
CN111294626A (en) Lyric display method and device
CN111314205B (en) Instant messaging matching method, device, system, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant