WO2014190786A1 - Asynchronous chorus method and apparatus - Google Patents

Asynchronous chorus method and apparatus

Info

Publication number
WO2014190786A1
Authority
WO
WIPO (PCT)
Prior art keywords
file
chorus
audio
audio file
accompaniment
Prior art date
Application number
PCT/CN2014/072300
Other languages
English (en)
French (fr)
Inventor
张鹏飞
杨振宇
张啸
林形省
Original Assignee
小米科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 小米科技有限责任公司 (Xiaomi Inc.)
Priority to EP14804158.5A (EP3007163B1)
Priority to KR1020157013606A (KR101686632B1)
Priority to BR112015015358-5A (BR112015015358B1)
Priority to RU2015121498A (RU2635835C2)
Priority to JP2015543298A (JP6085036B2)
Priority to MX2015007251A (MX361534B)
Priority to US14/296,801 (US9224374B2)
Publication of WO2014190786A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/36 Accompaniment arrangements
    • G10H1/361 Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • G10H1/365 Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems, the accompaniment information being stored on a host computer and transmitted to a reproducing terminal by means of a network, e.g. public telephone lines
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/36 Accompaniment arrangements
    • G10H1/361 Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/155 Musical effects
    • G10H2210/245 Ensemble, i.e. adding one or more voices, also instrumental voices
    • G10H2210/251 Chorus, i.e. automatic generation of two or more extra voices added to the melody, e.g. by a chorus effect processor or multiple voice harmonizer, to produce a chorus or unison effect, wherein individual sounds from multiple sources with roughly the same timbre converge and are perceived as one
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00 Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/171 Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
    • G10H2240/175 Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments for jam sessions or musical collaboration through a network, e.g. for composition, ensemble playing or repeating; Compensation of network or internet delays therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/023 Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • General Engineering & Computer Science (AREA)
  • Electrophonic Musical Instruments (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

An asynchronous chorus method and apparatus are provided to solve the problems of poor chorus quality, cumbersome processing, and high cost. The method includes: after receiving an audio file uploaded by a terminal requesting to join the chorus using a first accompaniment file, marking the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion (101); and determining the marked audio file as a second accompaniment file (102). In this way, each chorus participant can sing his or her own part at a different time and place, so the chorus quality no longer suffers from crowding or from voices differing in loudness and distance. Moreover, each user can record his or her own part multiple times, or process it separately, without affecting the parts sung by other users, so a single poor performance never forces the whole song to be re-sung.

Description

Asynchronous chorus method and apparatus

This application is based on, and claims priority to, Chinese Patent Application No. 201310210338.5, filed on May 30, 2013, the entire contents of which are incorporated herein by reference.

Technical Field

The embodiments of the present disclosure relate to the field of network technologies, and in particular to an asynchronous chorus method and apparatus.

Background

With the rapid development of smart mobile terminals, their functionality has grown increasingly rich. Mobile terminals now host social karaoke applications with built-in reverberation and echo effects that can polish and beautify the user's voice. Besides accompaniments, such an application also provides the lyrics corresponding to each accompaniment, which can be displayed in sync during singing and, as in a KTV, highlighted word by word. The application further offers an intelligent scoring feature, and the resulting score can be shared with friends.

At present, karaoke on a mobile terminal is generally sung by one person alone. The finished recording is submitted to a server for storage and display, and other users of the application can play the song and rate it. For a multi-person chorus, several users must sing into the same mobile terminal at the same time and then submit the result to the server for storage.

However, this approach requires multiple users to sing into the same mobile terminal simultaneously and then submit the recording to the server for processing. During such a chorus the users' voices may differ in loudness and in distance from the microphone, so the chorus quality is poor. Furthermore, if one user performs badly, the whole song may have to be re-sung and re-processed by the server, which is cumbersome and costly.

Summary
The embodiments of the present disclosure provide an asynchronous chorus method and apparatus to solve the problems of poor chorus quality, cumbersome processing, and high cost.

According to a first aspect of the embodiments of the present disclosure, an asynchronous chorus method is disclosed. The method includes: after receiving an audio file uploaded by a terminal requesting to join the chorus using a first accompaniment file, marking the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion;

and determining the marked audio file as a second accompaniment file, where the audio file is formed by the terminal requesting to join the chorus using the first accompaniment file encoding the captured audio information with the downloaded first accompaniment file.

Optionally, the method further includes: sending the second accompaniment file to a terminal requesting to join the chorus using the second accompaniment file.

Optionally, marking the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion includes:

obtaining the audio-information positions in the audio file; analyzing which of those positions contain a mix, the mixed portions being formed by encoding the captured audio information with the first accompaniment file;

and marking the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion.

Optionally, marking the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion includes:

changing the color of the displayed text corresponding to the portion of the audio file that contains a mix and has not yet been marked;

and/or

annotating, in the name of the audio file, the portion of the audio file that contains a mix and has not yet been marked. Optionally, after the audio file uploaded by the terminal requesting to join the chorus using the first accompaniment file is received, the method further includes:

receiving volume information, belonging to the captured audio information, in the audio file uploaded by the terminal requesting to join the chorus using the first accompaniment file;

and sending the volume information to a terminal requesting to join the chorus using the second accompaniment file, to prompt its user to sing at that volume.

Optionally, the method further includes:

marking the passages of an initial accompaniment file intended for the chorus, and sending the marked initial accompaniment file to a terminal requesting to join the chorus using the initial accompaniment file;

where the marked initial accompaniment file includes at least one passage.

Optionally, marking the passages of the initial accompaniment file intended for the chorus includes:

reading the time interval between every two characters in the initial accompaniment file;

comparing the time interval with a preset threshold;

and, when the time interval between two characters is greater than the preset threshold, marking the end of a passage between those two characters.
According to a second aspect of the embodiments of the present disclosure, another asynchronous chorus method is disclosed. The method includes: capturing audio information, and encoding the audio information with a first accompaniment file downloaded from a server to form an audio file;

and marking the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion, and uploading the marked audio file to the server.

Optionally, marking the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion includes:

obtaining the audio-information positions in the audio file;

analyzing which of those positions contain a mix, the mixed portions being formed by encoding the captured audio information with the first accompaniment file;

and marking the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion.

Optionally, marking the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion includes:

changing the color of the displayed text corresponding to the portion of the audio file that contains a mix and has not yet been marked;

and/or

annotating, in the name of the audio file, the portion of the audio file that contains a mix and has not yet been marked. Optionally, after the marked audio file is uploaded to the server, the method further includes:

recording volume information, belonging to the captured audio information, in the audio file, and uploading the volume information to the server.
According to a third aspect of the embodiments of the present disclosure, an asynchronous chorus apparatus is disclosed. The apparatus includes:

a first marking module configured to, after an audio file uploaded by a terminal requesting to join the chorus using a first accompaniment file is received, mark the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion;

and a determining module configured to determine the marked audio file as a second accompaniment file, where the audio file is formed by the terminal requesting to join the chorus using the first accompaniment file encoding the captured audio information with the downloaded first accompaniment file.

Optionally, the apparatus further includes:

a first sending module configured to send the second accompaniment file to a terminal requesting to join the chorus using the second accompaniment file.

Optionally, the first marking module includes:

a first obtaining submodule configured to obtain the audio-information positions in the audio file;

a first analysis submodule configured to analyze which of those positions contain a mix, the mixed portions being formed by encoding the captured audio information with the first accompaniment file;

and a first mix-marking submodule configured to mark the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion.

Optionally, the first mix-marking submodule includes:

a first changing subunit configured to change the color of the displayed text corresponding to the portion of the audio file that contains a mix and has not yet been marked;

and/or

a first marking subunit configured to annotate, in the name of the audio file, the portion of the audio file that contains a mix and has not yet been marked.

Optionally, the apparatus further includes:

a receiving module configured to, after the audio file uploaded by the terminal requesting to join the chorus using the first accompaniment file is received, receive volume information, belonging to the captured audio information, in that audio file;

and a second sending module configured to send the volume information to a terminal requesting to join the chorus using the second accompaniment file, to prompt its user to sing at that volume. Optionally, the apparatus further includes:

a second marking module configured to mark the passages of an initial accompaniment file intended for the chorus;

a third sending module configured to send the initial accompaniment file marked by the second marking module to a terminal requesting to join the chorus using the initial accompaniment file;

where the marked initial accompaniment file includes at least one passage.

Optionally, the second marking module includes:

a reading submodule configured to read the time interval between every two characters in the initial accompaniment file;

a comparison submodule configured to compare the time interval with a preset threshold;

and a text-marking submodule configured to, when the time interval between two characters is greater than the preset threshold, mark the end of a passage between those two characters.
According to a fourth aspect of the embodiments of the present disclosure, another asynchronous chorus apparatus is disclosed. The apparatus includes:

an encoding module configured to capture audio information, and to encode the audio information with a first accompaniment file downloaded from a server to form an audio file;

and a third marking module configured to mark the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion, and to upload the marked audio file to the server.

Optionally, the third marking module includes:

a third obtaining submodule configured to obtain the audio-information positions in the audio file;

a third analysis submodule configured to analyze which of those positions contain a mix, the mixed portions being formed by encoding the captured audio information with the first accompaniment file;

and a third mix-marking submodule configured to mark the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion.

Optionally, the third mix-marking submodule includes:

a third changing subunit configured to change the color of the displayed text corresponding to the portion of the audio file that contains a mix and has not yet been marked;

and/or

a third marking subunit configured to annotate, in the name of the audio file, the portion of the audio file that contains a mix and has not yet been marked.

Optionally, the apparatus further includes:

a record-uploading module configured to, after the marked audio file is uploaded to the server, record volume information, belonging to the captured audio information, in the audio file, and to upload the volume information to the server.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:

In the asynchronous chorus method proposed by the embodiments of the present disclosure, after an audio file uploaded by a terminal requesting to join the chorus using a first accompaniment file is received, the portion of the audio file that contains a mix and has not yet been marked is marked as an already-sung portion, and the marked audio file is determined as a second accompaniment file, the audio file being formed by the terminal encoding the captured audio information with the downloaded first accompaniment file. When a user joins the chorus, that user's terminal can download the audio file already sung by other users and use it as the accompaniment for its own part. Each participant can therefore sing his or her own part at a different time and place, without the chorus quality suffering from crowding or from voices differing in loudness and distance. Moreover, each user can record a part multiple times, or process it separately, without affecting the parts sung by other users, so a single poor performance never forces the whole song to be re-sung.

It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present invention.

Brief Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the description, serve to explain its principles.
Fig. 1 is a flowchart of an asynchronous chorus method according to a first exemplary embodiment;

Fig. 2 is a flowchart of an asynchronous chorus method according to a second exemplary embodiment;

Fig. 3 is a flowchart of an asynchronous chorus method according to a third exemplary embodiment;

Fig. 4 is a flowchart of an asynchronous chorus method according to a fourth exemplary embodiment;

Fig. 5 is a schematic diagram of a marked initial accompaniment file according to a fifth exemplary embodiment;

Fig. 6 is a structural block diagram of an asynchronous chorus apparatus according to a sixth exemplary embodiment;

Fig. 7 is a structural block diagram of an asynchronous chorus apparatus according to a seventh exemplary embodiment;

Fig. 8 is a structural block diagram of an asynchronous chorus apparatus according to an eighth exemplary embodiment;

Fig. 9 is a structural block diagram of an asynchronous chorus apparatus according to a ninth exemplary embodiment.

Detailed Description
To make the above objects, features, and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.

In the asynchronous chorus method and apparatus proposed by the present invention, each chorus participant can sing his or her own part at a different time and place, without the chorus quality suffering from crowding or from voices differing in loudness and distance, and each user can process the part he or she sings separately, without affecting the parts sung by other users.

Embodiment 1:

At present, a chorus requires multiple users to sing into the same mobile terminal at the same time and then submit the recording to a server for processing. The users' voices may differ in loudness and distance, so the chorus quality is poor. Furthermore, if one user performs badly, the whole song may have to be re-sung and re-processed by the server, which is cumbersome and costly.

To address these problems, an embodiment of the present disclosure proposes an asynchronous chorus method. Referring to Fig. 1, which shows a flowchart of the asynchronous chorus method proposed by Embodiment 1 of the present disclosure, the method may include:

In step 101, after an audio file uploaded by a terminal requesting to join the chorus using a first accompaniment file is received, the portion of the audio file that contains a mix and has not yet been marked is marked as an already-sung portion.

In the embodiments of the present disclosure, multiple users may join a chorus from different terminals, and a terminal may be a smartphone, a tablet, or the like.

When the user of the terminal requesting to join the chorus using the first accompaniment file sings, the first accompaniment file may first be downloaded from the server. The terminal then captures the user's audio information, encodes the captured audio information with the downloaded first accompaniment file to form the audio file corresponding to that terminal, and uploads it to the server.

After receiving the audio file uploaded by that terminal, the server may mark the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion. Here, the audio file is formed by the terminal requesting to join the chorus using the first accompaniment file encoding the captured audio information with the downloaded first accompaniment file.

In step 102, the marked audio file is determined as a second accompaniment file.

After the audio file uploaded by the terminal requesting to join the chorus using the first accompaniment file has been marked, the marked audio file may be determined as the second accompaniment file. When the user of a terminal requesting to join the chorus using the second accompaniment file participates, that terminal can download the second accompaniment file from the server and use it directly to join the chorus.

The specific processes of the above steps are discussed in detail in Embodiment 2 below.

In the embodiments of the present disclosure, when a user joins the chorus, the user's terminal can download the audio file already sung by other users as the accompaniment for its own part, so each participant can sing his or her own part at a different time and place, without the chorus quality suffering from crowding or from voices differing in loudness and distance. Moreover, each user can record a part multiple times, or process it separately, without affecting the parts sung by other users, so a single poor performance never forces the whole song to be re-sung.
Embodiment 2:

The asynchronous chorus method of Embodiment 1 above is now described in detail through Embodiment 2 of the present disclosure.

Referring to Fig. 2, which shows a flowchart of the asynchronous chorus method proposed by Embodiment 2 of the present disclosure, the method may include:

In step 201, the passages of an initial accompaniment file intended for the chorus are marked, and the marked initial accompaniment file is sent to a terminal requesting to join the chorus using the initial accompaniment file.

In the embodiments of the present disclosure, the initial accompaniment file may first be marked, and the marked initial accompaniment file may be sent to a terminal requesting to join the chorus using it. The marked initial accompaniment file may include at least one passage.

In one embodiment, the initial accompaniment file can be marked automatically by reading time intervals, so the above process of marking its passages for the chorus may include (a concrete sketch follows this list):

a1. reading the time interval between every two characters in the initial accompaniment file;

a2. comparing the time interval with a preset threshold;

a3. when the time interval between two characters is greater than the preset threshold, marking the end of a passage between those two characters.
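As a rough illustration of steps a1 to a3, the sketch below (Python) segments lyrics by the gap between consecutive characters. The input format of (timestamp, character) pairs, the helper name mark_passages, and the 3-second threshold are assumptions for illustration; the patent does not prescribe a data format:

```python
# Minimal sketch of steps a1-a3, assuming the accompaniment file's lyrics
# are available as (timestamp_in_seconds, character) pairs.

def mark_passages(timed_chars, gap_threshold=3.0):
    """Split lyrics into passages wherever the interval between two
    consecutive characters exceeds the preset threshold (step a3)."""
    passages, current = [], []
    for i, (t, ch) in enumerate(timed_chars):
        current.append(ch)
        if i + 1 < len(timed_chars):
            # a1: read the interval between every two characters.
            interval = timed_chars[i + 1][0] - t
            # a2/a3: compare with the threshold; mark a passage end.
            if interval > gap_threshold:
                passages.append("".join(current))
                current = []
    if current:
        passages.append("".join(current))
    return passages

# Example: a 4.5-second gap splits the lyrics into two passages.
lyrics = [(0.0, "明"), (0.5, "明"), (1.0, "白"), (1.5, "白"),
          (6.0, "我"), (6.5, "的"), (7.0, "心")]
print(mark_passages(lyrics))  # ['明明白白', '我的心']
```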
The marking proposed by the embodiments of the present disclosure may place a special symbol (for example, a dot) between the two characters; the two characters then belong to two different passages of the accompaniment file, and the special symbol serves as an end-of-passage mark. Alternatively, to distinguish a male-female duet, "male:" or "female:" may be marked between the two characters; the two characters then likewise belong to two passages, and the "male:" or "female:" label serves as the end-of-passage mark.

Of course, the initial accompaniment file may also be marked in other ways, for example with different colors, which the embodiments of the present disclosure do not limit.

For example, using the "male:"/"female:" labels above, the following lyrics could be marked as follows:

"Female: 明明白白我的心

渴望一份真感情

曾经为爱伤透了心

为什么甜蜜的梦容易醒

Male: 你有一双温柔的眼睛

你有善解人意的心灵

如果你愿意请让我靠近

我想你会明白我的心".

Marking the accompaniment file by judging the time interval between every two characters makes the marking more accurate. The specific value of the preset threshold can be set by those skilled in the art according to practical experience.

Of course, the embodiments of the present disclosure may also mark the initial accompaniment file in other ways, for example according to the pitch of the accompaniment, which is not limited here.
In step 202, after an audio file uploaded by a terminal requesting to join the chorus using a first accompaniment file is received, the portion of the audio file that contains a mix and has not yet been marked is marked as an already-sung portion.

In the embodiments of the present disclosure, a song may be sung as a chorus by multiple users through different terminals. While each user sings, his or her terminal captures the user's audio information, encodes it with the accompaniment file that terminal downloaded to form an audio file, and finally uploads the encoded audio file to the server.

After receiving the audio file uploaded by a terminal requesting to join the chorus, the server may mark the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion.

The process of marking that portion as an already-sung portion may include (a sketch follows this list):

b1. obtaining the audio-information positions in the audio file;

b2. analyzing which of those positions contain a mix, the mixed portions being formed by encoding the captured audio information with the first accompaniment file;

b3. marking the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion.
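The patent does not specify how the mixed portions are detected in step b2. One plausible sketch, assuming the server can decode both the uploaded file and the bare accompaniment into equally sampled float PCM arrays, is to flag frames where the upload carries noticeably more energy than the accompaniment alone; the frame size and the 3 dB ratio are illustrative assumptions:

```python
# Hedged sketch of step b2: treat regions of the uploaded audio whose
# energy clearly exceeds that of the bare accompaniment as mixed (sung)
# portions. The energy heuristic is an assumption; the patent only says
# the mixed portions are analyzed, not how.

import numpy as np

def find_mixed_regions(mix, accompaniment, sr=44100,
                       frame=2048, ratio_db=3.0):
    n = min(len(mix), len(accompaniment))
    regions, start = [], None
    for i in range(0, n - frame, frame):
        e_mix = np.mean(mix[i:i + frame] ** 2) + 1e-12
        e_acc = np.mean(accompaniment[i:i + frame] ** 2) + 1e-12
        sung = 10 * np.log10(e_mix / e_acc) > ratio_db  # voice adds energy
        t = i / sr
        if sung and start is None:
            start = t
        elif not sung and start is not None:
            regions.append((start, t))
            start = None
    if start is not None:
        regions.append((start, (n - frame) / sr))
    return regions  # [(start_seconds, end_seconds), ...]
```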
In the embodiments of the present disclosure, the portion of the audio file that contains a mix and has not yet been marked may be marked in the following ways:

changing the color of the displayed text corresponding to that portion (for example, if the displayed text of unmixed portions is black, the text of the mixed, not-yet-marked portion can be marked in red); or annotating that portion in the name of the audio file (for example, the file name can state in words which part has been sung). Of course, both kinds of marking may also be applied to the same portion at once, which the embodiments of the present disclosure do not limit.

In the embodiments of the present disclosure, the mixed, not-yet-marked portion may also be marked as an already-sung portion in other ways, for example by bolding the corresponding text, which is not limited here.
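A minimal sketch of the two marking styles just described, assuming lyrics are rendered from a list of labeled passages; the SUNG_COLOR constant and the "[sung: ...]" file-name convention are invented for illustration:

```python
# Sketch of the two marking styles: coloring the displayed lyrics of
# already-sung passages, and tagging the sung part in the file name.
# The "[sung: ...]" naming convention is an assumption for illustration.

SUNG_COLOR, UNSUNG_COLOR = "red", "black"

def lyric_display(passages, sung_labels):
    """Return (text, color) pairs for rendering the lyric sheet."""
    return [(text, SUNG_COLOR if label in sung_labels else UNSUNG_COLOR)
            for label, text in passages]

def tag_file_name(name, sung_labels):
    return f"{name} [sung: {','.join(sorted(sung_labels))}]"

passages = [("A", "明明白白我的心"), ("B", "你有一双温柔的眼睛")]
print(lyric_display(passages, {"A"}))   # part A shown in red
print(tag_file_name("song_X1", {"A"}))  # song_X1 [sung: A]
```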
In step 203, the marked audio file is determined as a second accompaniment file.

Once the audio file has been marked, it can be determined as the second accompaniment file, and the user of a terminal requesting to join the chorus using the second accompaniment file can sing along with it.

In step 204, the second accompaniment file is sent to the terminal requesting to join the chorus using the second accompaniment file. When that terminal requests to join, the server may send it the determined second accompaniment file. Because different passages are marked in the initial accompaniment file, the user of that terminal can sing the passage corresponding to himself or herself according to those marks, while following the already-sung portions marked in the downloaded second accompaniment file.

After the terminal requesting to join the chorus using the second accompaniment file has captured the user's audio information, it may encode the captured audio information with the second accompaniment file to generate an audio file and upload it to the server; the server then marks this newly uploaded audio file, and the above process repeats.
In step 205, volume information belonging to the captured audio information in the audio file uploaded by the terminal requesting to join the chorus using the first accompaniment file is received.

To further improve the chorus, the embodiments of the present disclosure may also use a volume reminder. While the user of a terminal requesting to join the chorus sings, the terminal can record the volume information of the captured audio information and then upload it to the server.

Therefore, in the embodiments of the present disclosure, after receiving the audio file uploaded by the terminal requesting to join the chorus using the first accompaniment file, the server may also receive the volume information belonging to the captured audio information in that file. In step 206, the volume information is sent to the terminal requesting to join the chorus using the second accompaniment file, prompting its user to sing at that volume.

After receiving the volume information uploaded by the terminal requesting to join the chorus using the first accompaniment file, the server may send it to the terminal requesting to join the chorus using the second accompaniment file, so that the latter's user can be prompted to sing at that volume.

With this volume prompt, the user of the terminal using the second accompaniment file can adjust his or her own singing volume to match the volume of the user who sang with the first accompaniment file, further improving the chorus.

Steps 205 and 206 may also be performed before step 203; alternatively, steps 203 and 205 may be performed in parallel, steps 204 and 206 may be performed in parallel, and so on; the embodiments of the present disclosure do not limit the specific order of these steps.
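One way the volume information of steps 205 and 206 might be represented, assuming it is reduced to a coarse RMS envelope that the next singer's terminal can render as the waveform-style prompt mentioned in Embodiment 5; the one-value-per-second resolution is an assumption:

```python
# Sketch of the volume information: reduce the captured vocal to a coarse
# RMS envelope that the next singer's terminal can display, e.g. as a
# waveform-style prompt. The per-second resolution is illustrative.

import numpy as np

def volume_envelope(vocal, sr=44100, step_s=1.0):
    hop = int(sr * step_s)
    return [float(np.sqrt(np.mean(vocal[i:i + hop] ** 2)))
            for i in range(0, len(vocal) - hop + 1, hop)]

def prompt_next_singer(envelope, width=40):
    """Crude textual 'waveform' built from the previous singer's
    envelope, so the next user can match the volume."""
    peak = max(envelope) or 1.0
    for second, v in enumerate(envelope):
        print(f"{second:3d}s |{'#' * int(width * v / peak)}")
```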
In the asynchronous chorus method proposed by the embodiments of the present disclosure, each chorus participant can sing his or her own part at a different time and place, without the chorus quality suffering from crowding or from voices differing in loudness and distance, and each user can process the part he or she sings separately, without affecting the parts sung by other users. In addition, the embodiments of the present disclosure can record the volume information of the previous singer and prompt the next participant to sing at that volume, further improving the chorus.

Embodiments 1 and 2 above introduce the asynchronous chorus method mainly from the server side; Embodiments 3 and 4 below introduce it from the terminal side.
Embodiment 3:

Referring to Fig. 3, which shows a flowchart of the asynchronous chorus method proposed by Embodiment 3 of the present disclosure, the method may include:

In step 301, audio information is captured, and the audio information is encoded with the first accompaniment file downloaded from the server to form an audio file.

When a terminal requests to join the chorus using the first accompaniment file, it may first download the first accompaniment file from the server. Then, while the user sings, the terminal may capture the user's audio information and encode it with the first accompaniment file downloaded from the server to form an audio file.
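At its simplest, the "encoding" in step 301 amounts to mixing the captured vocal with the downloaded accompaniment sample by sample before the result is compressed for upload. A minimal sketch, assuming equal-length float PCM at a common sample rate (the patent fixes neither the codec nor the mixing law):

```python
# Minimal sketch of step 301's encoding: mix the captured vocal with the
# downloaded accompaniment sample-by-sample; the result would then be
# compressed (e.g. to AAC/MP3) for upload. Float PCM at a common sample
# rate is assumed; the patent does not fix these details.

import numpy as np

def mix_tracks(vocal, accompaniment, vocal_gain=1.0, acc_gain=1.0):
    n = min(len(vocal), len(accompaniment))
    mixed = vocal_gain * vocal[:n] + acc_gain * accompaniment[:n]
    peak = np.max(np.abs(mixed))
    if peak > 1.0:                 # normalize to avoid clipping
        mixed /= peak
    return mixed.astype(np.float32)
```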
In step 302, the portion of the audio file that contains a mix and has not yet been marked is marked as an already-sung portion, and the marked audio file is uploaded to the server.

After encoding the audio file, the terminal may mark the portion that contains a mix and has not yet been marked as an already-sung portion, and upload the marked audio file to the server.

After receiving the marked audio file, the server may use it as the second accompaniment file; a terminal requesting to join the chorus using the second accompaniment file can then download it from the server and use it directly to join the chorus.

The specific processes of the above steps are discussed in detail in Embodiment 4 below.

In the embodiments of the present disclosure, when a user joins the chorus, the user's terminal can download the audio file already sung by other users as the accompaniment for its own part, so each participant can sing his or her own part at a different time and place, without the chorus quality suffering from crowding or from voices differing in loudness and distance.
Embodiment 4:

The asynchronous chorus method of Embodiment 3 above is now described in detail through Embodiment 4 of the present disclosure.

Referring to Fig. 4, which shows a flowchart of the asynchronous chorus method proposed by Embodiment 4 of the present disclosure, the method may include:

In step 401, audio information is captured, and the audio information is encoded with the first accompaniment file downloaded from the server to form an audio file.

The terminal requesting to join the chorus using the first accompaniment file may capture the audio information of the singing user and then encode it with the first accompaniment file downloaded from the server to form an audio file.

In step 402, the portion of the audio file that contains a mix and has not yet been marked is marked as an already-sung portion, and the marked audio file is uploaded to the server.

In the embodiments of the present disclosure, this marking may be carried out by the terminal requesting to join the chorus. The process of marking the mixed, not-yet-marked portion as an already-sung portion may include:

c1. obtaining the audio-information positions in the audio file;

c2. analyzing which of those positions contain a mix, the mixed portions being formed by encoding the captured audio information with the first accompaniment file;

c3. marking the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion.

In the embodiments of the present disclosure, the marking of c3 may be implemented in the following ways: changing the color of the displayed text corresponding to the mixed, not-yet-marked portion; or annotating that portion in the name of the audio file.

Of course, the embodiments of the present disclosure may also apply both kinds of marking to the same portion at once.

In step 403, volume information belonging to the captured audio information in the audio file is recorded, and the volume information is uploaded to the server.

To further improve the chorus, the embodiments of the present disclosure may also use a volume reminder. While the user of the terminal requesting to join the chorus using the first accompaniment file sings, the terminal can record the volume information of the captured audio information and then upload it to the server.

After receiving the audio file and the volume information uploaded by that terminal, the server may use the marked audio file as the second accompaniment file. When the user of a terminal requesting to use the second accompaniment file joins the chorus, the terminal can download the second accompaniment file directly from the server and at the same time obtain the volume information, to prompt the user to sing at that volume.

The above processes in the embodiments of the present disclosure are not limited to being executed by the terminal requesting to join the chorus using the first accompaniment file; any terminal may execute them. The embodiments of the present disclosure can record the volume information of the previous singer and prompt the next participant to sing at that volume, further improving the chorus.
Embodiment 5:

The asynchronous chorus method above is now introduced through a public example.

First, the server marks the initial accompaniment file; the marked initial accompaniment file is shown in Fig. 5. As Fig. 5 shows, the marked initial accompaniment file may include three parts, A, B, and C, which can be sung by the users of three terminals: for example, user A's terminal sings the part marked A, user B's terminal sings the part marked B, and user C's terminal sings the part marked C. The specific marking process is explained in the server-side description below.

The asynchronous chorus method proposed by the embodiments of the present disclosure is now described for the terminal side and the server side in turn.

Terminal side:

1. User A's terminal downloads and plays the marked initial accompaniment file from the server, and user A sings the part marked A. User A's terminal captures user A's audio information and records its volume information, encodes the captured audio information with the initial accompaniment file to generate song X1 (i.e., an audio file), and uploads song X1 and user A's volume information to the server.

2. User B's terminal downloads and plays song X1 from the server, uses it as the accompaniment, and prompts user B with user A's volume information (for example, as a waveform) while user B sings the part marked B. User B's terminal captures user B's audio information and records its volume information, encodes the captured audio information with song X1 to generate song X2, and uploads song X2 and user B's volume information to the server.

3. User C's terminal downloads and plays song X2 from the server, uses it as the accompaniment, and prompts user C with user B's volume information while user C sings the part marked C. User C's terminal captures user C's audio information and records its volume information, encodes the captured audio information with song X2 to generate song X3, and uploads song X3 and user C's volume information to the server, completing the whole song.
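Tying the three steps above together, every terminal in the chain runs the same download-record-mix-upload cycle. A runnable toy sketch, with audio reduced to short lists of floats and all class and function names invented for illustration:

```python
# Runnable toy sketch of the chain in this example: users A, B, and C
# each run the same cycle against the server. Audio is reduced to lists
# of floats; none of these names are part of the patent.

class Server:
    def __init__(self, initial_accompaniment):
        self.track, self.sung, self.volume = initial_accompaniment, [], None

    def download_current(self):
        return self.track, self.sung, self.volume

    def upload(self, song, envelope, part):
        self.track, self.volume = song, envelope   # X(i) becomes accompaniment
        self.sung = self.sung + [part]             # server re-marks sung parts

def sing_my_part(server, vocal, part):
    accompaniment, sung, prev_volume = server.download_current()
    # prev_volume would be shown here as the volume prompt for this user.
    song = [v + a for v, a in zip(vocal, accompaniment)]  # mix/encode
    envelope = max(abs(s) for s in vocal)                 # crude volume info
    server.upload(song, envelope, part)

server = Server(initial_accompaniment=[0.1] * 8)          # toy samples
for part, vocal in [("A", [0.3] * 8), ("B", [0.2] * 8), ("C", [0.4] * 8)]:
    sing_my_part(server, vocal, part)
print(server.sung)   # ['A', 'B', 'C'] once song X3 completes the chain
```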
Server side:

This mainly comprises the following two parts:

1. Marking the initial accompaniment file, i.e., marking its different parts.

For example, the initial accompaniment file can be marked manually, or according to the time intervals between the lyrics of the song. Of course, other ways of marking it may also be used, which the embodiments of the present disclosure do not limit. For the specific marking process, refer to the related description of Embodiment 2; it is not discussed again here.

In the embodiments of the present disclosure, the initial file may be marked into three parts, as shown in Fig. 5, namely part A, part B, and part C.

2. Processing the songs (i.e., audio files) uploaded by terminals.

i. When a song uploaded by a terminal is received, the already-sung part of the song is determined according to the positions of the audio information in it. For example, the portions of the song that contain a mix can be analyzed; for the specific process, refer to the related description of Embodiment 2.

ii. The uploaded song is further marked according to the marks in the initial accompaniment file and the already-sung part determined from the received upload.

Taking song X1 as an example, the lyrics of the already-sung part A can be marked in a different color, or the fact that part A has been sung can be marked in the name of song X1; the lyric color and the song name can also be marked at the same time.

iii. The marked song, together with the volume information uploaded by the current terminal, is sent to the next requesting terminal.

In the embodiments of the present disclosure, the above marking process is implemented by the server. It should be noted that it may also be implemented by a terminal, which the embodiments of the present disclosure do not limit.

The embodiments of the present disclosure have the following beneficial effects:

(1) Each chorus participant can sing his or her own part at a different time and place.

(2) Each user can record his or her own part multiple times, or process his or her own voice separately, without affecting other users (for example, reverberation or other sound effects can be applied so that the part the user sang gains a special effect while the other users' voices remain unchanged).

(3) The chorus no longer suffers from the crowding of many people.

As for the foregoing method embodiments, they are expressed as series of action combinations for simplicity of description; however, those skilled in the art should know that the present disclosure is not limited by the described order of actions, because according to the present disclosure some steps may be performed in other orders or simultaneously. Moreover, those skilled in the art should also know that the embodiments described in this specification are all preferred embodiments, and the actions involved are not necessarily required by the present disclosure.
Embodiment 6:

Referring to Fig. 6, which shows a structural block diagram of an asynchronous chorus apparatus proposed by Embodiment 6 of the present disclosure, the apparatus may be a server-side apparatus that interacts with the terminal side.

The apparatus may include:

a first marking module 601 configured to, after an audio file uploaded by a terminal requesting to join the chorus is received, mark the portion of the audio file that contains a mix and has not yet been marked as an already-sung portion;

and a determining module 602 configured to determine the marked audio file as a second accompaniment file.

Here, the audio file is formed by the terminal requesting to join the chorus using the first accompaniment file encoding the captured audio information with the downloaded first accompaniment file.

In the embodiments of the present disclosure, when a user joins the chorus, the user's terminal can download the audio file already sung by other users as the accompaniment for its own part, so each participant can sing his or her own part at a different time and place, without the chorus quality suffering from crowding or from voices differing in loudness and distance. Moreover, each user can record a part multiple times, or process it separately, without affecting the parts sung by other users, so a single poor performance never forces the whole song to be re-sung.
实施例七: 参照图 7, 示出了本公开实施例七提出的一种异步合唱装置的结构框图, 该装置可以 为服务器侧的装置, 其与终端侧进行交互。
该装置可以包括:
第二标注模块 701, 用于标注初始伴奏文件的用于合唱的段落;
第二标注模块 701可以包括:
读取子模块 7011, 用于读取初始伴奏文件中每两个文字之间的时间间隔; 比较子模块 7012, 用于将时间间隔与预先设置的阈值进行比较;
文字标注子模块 7013,用于当两个文字之间的时间间隔大于预先设置的阈值时,在两 个文字之间标注为一个段落结束。
第三发送模块 702, 用于将第二标注模块标注后的初始伴奏文件发送至请求使用初始 伴奏文件参与合唱的终端;
其中, 标注后的初始伴奏文件包括至少一个段落。
a first marking module 703, configured to, after an audio file uploaded by a terminal requesting to join the chorus using the first accompaniment file is received, mark the portion of the audio file that contains mixed audio and has not been marked as the sung portion;
the first marking module 703 may include:
a first acquiring submodule 7031, configured to acquire the position of the audio information in the audio file;
a first analyzing submodule 7032, configured to analyze which portion at the position of the audio information contains mixed audio, the mixed portion being formed by encoding the collected audio information with the first accompaniment file;
a first mix marking submodule 7033, configured to mark the portion of the audio file that contains mixed audio and has not been marked as the sung portion.
The first mix marking submodule 7033 may include:
a first changing subunit, configured to change the color of the displayed text corresponding to the portion of the audio file that contains mixed audio and has not been marked;
and/or,
a first annotating subunit, configured to annotate, in the name of the audio file, the portion of the audio file that contains mixed audio and has not been marked;
a determining module 704, configured to determine the marked audio file as the second accompaniment file;
wherein the audio file is formed by the terminal requesting to join the chorus using the first accompaniment file encoding the collected audio information with the downloaded first accompaniment file;
a first sending module 705, configured to send the second accompaniment file to a terminal requesting to join the chorus using the second accompaniment file;
a receiving module 706, configured to, after the audio file uploaded by the terminal requesting to join the chorus using the first accompaniment file is received, receive the volume information, in that uploaded audio file, that belongs to the collected audio information;
a second sending module 707, configured to send the volume information to a terminal requesting to join the chorus using the second accompaniment file, to prompt the user to sing at that volume.
In the asynchronous chorus device proposed by this embodiment of the present disclosure, the chorus participants can sing their own parts at different times and in different places without the chorus effect suffering from crowding or from differences in loudness or distance between voices, and each user may process his or her own part separately without affecting the parts sung by other users. In addition, this embodiment may also record the volume information of the user of the previous terminal requesting to join the chorus and prompt the user of the next requesting terminal to sing at that volume, thereby further improving the chorus effect.
Embodiment 8:
Referring to FIG. 8, there is shown a structural block diagram of an asynchronous chorus device according to Embodiment 8 of the present disclosure. The device may be a terminal-side device that interacts with the server side.
The device may include:
an encoding module 801, configured to collect audio information and encode the audio information with a first accompaniment file downloaded from the server to form an audio file;
a third marking module 802, configured to mark the portion of the audio file that contains mixed audio and has not been marked as the sung portion, and upload the marked audio file to the server.
In this embodiment of the present disclosure, when a user joins the chorus, that user's terminal may download the audio file sung by other users as the accompaniment file for its own part, so the chorus participants can sing their own parts at different times and in different places without the chorus effect suffering from crowding or from differences in loudness or distance between voices.
Embodiment 9:
Referring to FIG. 9, there is shown a structural block diagram of an asynchronous chorus device according to Embodiment 9 of the present disclosure. The device may be a terminal-side device that interacts with the server side.
The device may include:
an encoding module 901, configured to collect audio information and encode the audio information with a first accompaniment file downloaded from the server to form an audio file;
a third marking module 902, configured to mark the portion of the audio file that contains mixed audio and has not been marked as the sung portion, and upload the marked audio file to the server;
the third marking module 902 may include:
a third acquiring submodule 9021, configured to acquire the position of the audio information in the audio file;
a third analyzing submodule 9022, configured to analyze which portion at the position of the audio information contains mixed audio, the mixed portion being formed by encoding the collected audio information with the first accompaniment file;
a third mix marking submodule 9023, configured to mark the portion of the audio file that contains mixed audio and has not been marked as the sung portion.
The third mix marking submodule 9023 may include:
a third changing subunit, configured to change the color of the displayed text corresponding to the portion of the audio file that contains mixed audio and has not been marked;
and/or, a third annotating subunit, configured to annotate, in the name of the audio file, the portion of the audio file that contains mixed audio and has not been marked;
a recording and uploading module 903, configured to, after the marked audio file is uploaded to the server, record the volume information, in the audio file, that belongs to the collected audio information, and upload the volume information to the server.
The asynchronous chorus device proposed by this embodiment of the present disclosure can record the volume information of the user of the previous terminal requesting to join the chorus and prompt the user of the next requesting terminal to sing at that volume, thereby further improving the chorus effect.
Since the above device embodiments are substantially similar to the method embodiments, they are described relatively briefly; for relevant points, refer to the corresponding descriptions of the method embodiments.
The embodiments in this specification are described in a progressive manner, each embodiment focusing on its differences from the others; for identical or similar parts between the embodiments, cross-reference suffices.
Those skilled in the art will readily appreciate that any combination of the above embodiments is feasible, so any such combination is an implementation of the present invention; owing to space limitations, this specification does not describe them one by one.
The present invention may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures and the like that perform particular tasks or implement particular abstract data types. The present invention may also be practiced in distributed computing environments in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.
Finally, it should also be noted that in this document relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of additional identical elements in the process, method, article or device that comprises it.
The asynchronous chorus method and device provided by the present invention have been introduced in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, changes may be made in the specific implementation and scope of application according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims

1. An asynchronous chorus method, characterized in that the method comprises:
after receiving an audio file uploaded by a terminal requesting to join a chorus using a first accompaniment file, marking the portion of the audio file that contains mixed audio and has not been marked as a sung portion;
determining the marked audio file as a second accompaniment file; wherein the audio file is formed by the terminal requesting to join the chorus using the first accompaniment file encoding collected audio information with the downloaded first accompaniment file.
2. The method according to claim 1, characterized by further comprising: sending the second accompaniment file to a terminal requesting to join the chorus using the second accompaniment file.
3. The method according to claim 1, characterized in that said marking the portion of the audio file that contains mixed audio and has not been marked as a sung portion comprises:
acquiring the position of the audio information in the audio file;
analyzing which portion at the position of the audio information contains mixed audio, the mixed portion being formed by encoding the collected audio information with the first accompaniment file;
marking the portion of the audio file that contains mixed audio and has not been marked as the sung portion.
4. The method according to claim 3, characterized in that said marking the portion of the audio file that contains mixed audio and has not been marked as the sung portion comprises:
changing the color of the displayed text corresponding to the portion of the audio file that contains mixed audio and has not been marked; and/or,
annotating, in the name of the audio file, the portion of the audio file that contains mixed audio and has not been marked.
5. The method according to claim 1 or 3, characterized in that after receiving the audio file uploaded by the terminal requesting to join the chorus using the first accompaniment file, the method further comprises:
receiving volume information, in the audio file uploaded by the terminal requesting to join the chorus using the first accompaniment file, that belongs to the collected audio information;
sending the volume information to a terminal requesting to join the chorus using the second accompaniment file, to prompt the user to sing at that volume.
6. The method according to claim 1, characterized in that the method further comprises:
marking passages of an initial accompaniment file for the chorus, and sending the marked initial accompaniment file to a terminal requesting to join the chorus using the initial accompaniment file;
wherein the marked initial accompaniment file comprises at least one passage.
7. The method according to claim 6, characterized in that said marking passages of the initial accompaniment file for the chorus comprises:
reading the time interval between every two words in the initial accompaniment file;
comparing the time interval with a preset threshold;
when the time interval between two words is greater than the preset threshold, marking the end of a passage between the two words.
8. An asynchronous chorus method, characterized in that the method comprises:
collecting audio information, and encoding the audio information with a first accompaniment file downloaded from a server to form an audio file;
marking the portion of the audio file that contains mixed audio and has not been marked as a sung portion, and uploading the marked audio file to the server.
9. The method according to claim 8, characterized in that said marking the portion of the audio file that contains mixed audio and has not been marked as a sung portion comprises:
acquiring the position of the audio information in the audio file;
analyzing which portion at the position of the audio information contains mixed audio, the mixed portion being formed by encoding the collected audio information with the first accompaniment file;
marking the portion of the audio file that contains mixed audio and has not been marked as the sung portion.
10. The method according to claim 9, characterized in that said marking the portion of the audio file that contains mixed audio and has not been marked as the sung portion comprises:
changing the color of the displayed text corresponding to the portion of the audio file that contains mixed audio and has not been marked; and/or,
annotating, in the name of the audio file, the portion of the audio file that contains mixed audio and has not been marked.
11. The method according to claim 8 or 9, characterized in that after uploading the marked audio file to the server, the method further comprises:
recording the volume information, in the audio file, that belongs to the collected audio information, and uploading the volume information to the server.
12. An asynchronous chorus device, characterized in that the device comprises:
a first marking module, configured to, after an audio file uploaded by a terminal requesting to join a chorus using a first accompaniment file is received, mark the portion of the audio file that contains mixed audio and has not been marked as a sung portion;
a determining module, configured to determine the marked audio file as a second accompaniment file; wherein the audio file is formed by the terminal requesting to join the chorus using the first accompaniment file encoding collected audio information with the downloaded first accompaniment file.
13. The device according to claim 12, characterized in that the device further comprises:
a first sending module, configured to send the second accompaniment file to a terminal requesting to join the chorus using the second accompaniment file.
14. The device according to claim 12, characterized in that the first marking module comprises: a first acquiring submodule, configured to acquire the position of the audio information in the audio file;
a first analyzing submodule, configured to analyze which portion at the position of the audio information contains mixed audio, the mixed portion being formed by encoding the collected audio information with the first accompaniment file;
a first mix marking submodule, configured to mark the portion of the audio file that contains mixed audio and has not been marked as the sung portion.
15. The device according to claim 14, characterized in that the first mix marking submodule comprises: a first changing subunit, configured to change the color of the displayed text corresponding to the portion of the audio file that contains mixed audio and has not been marked;
and/or,
a first annotating subunit, configured to annotate, in the name of the audio file, the portion of the audio file that contains mixed audio and has not been marked.
16. The device according to claim 12 or 14, characterized in that the device further comprises:
a receiving module, configured to, after the audio file uploaded by the terminal requesting to join the chorus using the first accompaniment file is received, receive the volume information, in that uploaded audio file, that belongs to the collected audio information;
a second sending module, configured to send the volume information to a terminal requesting to join the chorus using the second accompaniment file, to prompt the user to sing at that volume.
17. The device according to claim 12, characterized in that the device further comprises:
a second marking module, configured to mark passages of an initial accompaniment file for the chorus;
a third sending module, configured to send the initial accompaniment file marked by the second marking module to a terminal requesting to join the chorus using the initial accompaniment file;
wherein the marked initial accompaniment file comprises at least one passage.
18. The device according to claim 17, characterized in that the second marking module comprises: a reading submodule, configured to read the time interval between every two words in the initial accompaniment file;
a comparing submodule, configured to compare the time interval with a preset threshold;
a word marking submodule, configured to, when the time interval between two words is greater than the preset threshold, mark the end of a passage between the two words.
19. An asynchronous chorus device, characterized in that the device comprises:
an encoding module, configured to collect audio information and encode the audio information with a first accompaniment file downloaded from a server to form an audio file;
a third marking module, configured to mark the portion of the audio file that contains mixed audio and has not been marked as a sung portion, and upload the marked audio file to the server.
20. The device according to claim 19, characterized in that the third marking module comprises: a third acquiring submodule, configured to acquire the position of the audio information in the audio file;
a third analyzing submodule, configured to analyze which portion at the position of the audio information contains mixed audio, the mixed portion being formed by encoding the collected audio information with the first accompaniment file;
a third mix marking submodule, configured to mark the portion of the audio file that contains mixed audio and has not been marked as the sung portion.
21. The device according to claim 20, characterized in that the third mix marking submodule comprises: a third changing subunit, configured to change the color of the displayed text corresponding to the portion of the audio file that contains mixed audio and has not been marked;
and/or,
a third annotating subunit, configured to annotate, in the name of the audio file, the portion of the audio file that contains mixed audio and has not been marked.
22. The device according to claim 19 or 20, characterized in that the device further comprises: a recording and uploading module, configured to, after the marked audio file is uploaded to the server, record the volume information, in the audio file, that belongs to the collected audio information, and upload the volume information to the server.
PCT/CN2014/072300 2013-05-30 2014-02-20 一种异步合唱方法和装置 WO2014190786A1 (zh)

Priority Applications (7)

Application Number Priority Date Filing Date Title
EP14804158.5A EP3007163B1 (en) 2013-05-30 2014-02-20 Asynchronous chorus method and device
KR1020157013606A KR101686632B1 (ko) 2013-05-30 2014-02-20 비동기 코러스 방법, 장치, 프로그램 및 기록매체
BR112015015358-5A BR112015015358B1 (pt) 2013-05-30 2014-02-20 Método e dispositivo para coro assíncrono
RU2015121498A RU2635835C2 (ru) 2013-05-30 2014-02-20 Способ и устройство для асинхронного хорового исполнения
JP2015543298A JP6085036B2 (ja) 2013-05-30 2014-02-20 非同期合唱方法、非同期合唱装置、プログラム及び記録媒体
MX2015007251A MX361534B (es) 2013-05-30 2014-02-20 Método y dispositivo de coro asíncrono.
US14/296,801 US9224374B2 (en) 2013-05-30 2014-06-05 Methods and devices for audio processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310210338.5A CN103295568B (zh) 2013-05-30 2013-05-30 一种异步合唱方法和装置
CN201310210338.5 2013-05-30

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/296,801 Continuation US9224374B2 (en) 2013-05-30 2014-06-05 Methods and devices for audio processing

Publications (1)

Publication Number Publication Date
WO2014190786A1 true WO2014190786A1 (zh) 2014-12-04

Family

ID=49096329

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/072300 WO2014190786A1 (zh) 2013-05-30 2014-02-20 一种异步合唱方法和装置

Country Status (8)

Country Link
EP (1) EP3007163B1 (zh)
JP (1) JP6085036B2 (zh)
KR (1) KR101686632B1 (zh)
CN (1) CN103295568B (zh)
BR (1) BR112015015358B1 (zh)
MX (1) MX361534B (zh)
RU (1) RU2635835C2 (zh)
WO (1) WO2014190786A1 (zh)


Families Citing this family (17)

Publication number Priority date Publication date Assignee Title
US9224374B2 (en) 2013-05-30 2015-12-29 Xiaomi Inc. Methods and devices for audio processing
CN103295568B (zh) * 2013-05-30 2015-10-14 小米科技有限责任公司 一种异步合唱方法和装置
CN105023559A (zh) 2015-05-27 2015-11-04 腾讯科技(深圳)有限公司 K歌处理方法及系统
CN105006234B (zh) * 2015-05-27 2018-06-29 广州酷狗计算机科技有限公司 一种k歌处理方法及装置
CN106486128B (zh) * 2016-09-27 2021-10-22 腾讯科技(深圳)有限公司 一种双音源音频数据的处理方法及装置
CN106601220A (zh) * 2016-12-08 2017-04-26 天脉聚源(北京)传媒科技有限公司 一种录制多人轮唱的方法及装置
CN106686431B (zh) * 2016-12-08 2019-12-10 杭州网易云音乐科技有限公司 一种音频文件的合成方法和设备
CN108630240B (zh) * 2017-03-23 2020-05-26 北京小唱科技有限公司 一种合唱方法及装置
CN107993637B (zh) * 2017-11-03 2021-10-08 厦门快商通信息技术有限公司 一种卡拉ok歌词分词方法与系统
CN108109652A (zh) * 2017-12-27 2018-06-01 北京酷我科技有限公司 一种k歌合唱录音的方法
CN113039573A (zh) * 2018-06-29 2021-06-25 思妙公司 具有种子/加入机制的视听协作系统和方法
CN109147746B (zh) * 2018-07-27 2021-07-16 维沃移动通信有限公司 一种k歌方法及终端
US11693616B2 (en) * 2019-08-25 2023-07-04 Smule, Inc. Short segment generation for user engagement in vocal capture applications
CN111326132B (zh) 2020-01-22 2021-10-22 北京达佳互联信息技术有限公司 音频处理方法、装置、存储介质及电子设备
CN111462767B (zh) * 2020-04-10 2024-01-09 全景声科技南京有限公司 音频信号的增量编码方法及装置
CN112312163B (zh) * 2020-10-30 2024-05-28 北京字跳网络技术有限公司 视频生成方法、装置、电子设备及存储介质
CN116704978A (zh) * 2022-02-28 2023-09-05 北京字跳网络技术有限公司 音乐生成方法、装置、设备、存储介质及程序


Family Cites Families (16)

Publication number Priority date Publication date Assignee Title
JP3241372B2 (ja) * 1990-11-27 2001-12-25 パイオニア株式会社 カラオケ演奏方法
JP2006195215A (ja) * 2005-01-14 2006-07-27 Sony Ericsson Mobilecommunications Japan Inc 通信端末装置、及び演奏システム
JP4431507B2 (ja) * 2005-01-31 2010-03-17 株式会社第一興商 カラオケシステム
US20070163428A1 (en) * 2006-01-13 2007-07-19 Salter Hal C System and method for network communication of music data
JP4382786B2 (ja) * 2006-08-22 2009-12-16 株式会社タイトー 音声ミックスダウン装置、音声ミックスダウンプログラム
US20080184870A1 (en) * 2006-10-24 2008-08-07 Nokia Corporation System, method, device, and computer program product providing for a multiple-lyric karaoke system
JP2009031549A (ja) * 2007-07-27 2009-02-12 Yamaha Corp メロディ表示制御装置及びカラオケ装置
JP5014073B2 (ja) * 2007-11-12 2012-08-29 ヤマハ株式会社 メロディ表示制御装置及びカラオケ装置
DE102008008388A1 (de) * 2008-02-09 2009-08-13 Cambiz Seyed-Asgari Mehrspuraufzeichnungs- und Wiedergabesystem zur räumlich und zeitlich unabhängigen Aufzeichnung und Wiedergabe mehrspuriger medialer Inhalte unterschiedlicher Art
JP2010014823A (ja) * 2008-07-01 2010-01-21 Nippon Telegr & Teleph Corp <Ntt> 楽曲情報制御装置
WO2010041147A2 (en) * 2008-10-09 2010-04-15 Futureacoustic A music or sound generation system
US20110126103A1 (en) * 2009-11-24 2011-05-26 Tunewiki Ltd. Method and system for a "karaoke collage"
US9058797B2 (en) * 2009-12-15 2015-06-16 Smule, Inc. Continuous pitch-corrected vocal capture device cooperative with content server for backing track mix
JP5457867B2 (ja) * 2010-02-08 2014-04-02 Kddi株式会社 画像表示装置、画像表示方法および画像表示プログラム
CN102158745B (zh) * 2011-02-18 2014-11-19 深圳创维数字技术股份有限公司 卡拉ok业务的实现方法、终端、服务器端及系统
CN103021401B (zh) * 2012-12-17 2015-01-07 上海音乐学院 基于互联网的多人异步合唱混音合成方法及合成系统

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US5235124A (en) * 1991-04-19 1993-08-10 Pioneer Electronic Corporation Musical accompaniment playing apparatus having phoneme memory for chorus voices
JP2005010639A (ja) * 2003-06-20 2005-01-13 Yamaha Corp カラオケ装置
CN101345047A (zh) * 2007-07-12 2009-01-14 英业达股份有限公司 人声自动校正的混音系统及其混音方法
CN102456340A (zh) * 2010-10-19 2012-05-16 盛大计算机(上海)有限公司 基于互联网的卡拉ok对唱方法及系统
TW201228290A (en) * 2010-12-28 2012-07-01 Tse-Ming Chang Networking multi-person asynchronous chorus audio/video works system
CN103295568A (zh) * 2013-05-30 2013-09-11 北京小米科技有限责任公司 一种异步合唱方法和装置

Cited By (3)

Publication number Priority date Publication date Assignee Title
EP3306606A4 (en) * 2015-05-27 2019-01-16 Guangzhou Kugou Computer Technology Co., Ltd. METHOD, APPARATUS AND SYSTEM FOR AUDIO PROCESSING
CN110660376A (zh) * 2019-09-30 2020-01-07 腾讯音乐娱乐科技(深圳)有限公司 音频处理方法、装置及存储介质
CN110660376B (zh) * 2019-09-30 2022-11-29 腾讯音乐娱乐科技(深圳)有限公司 音频处理方法、装置及存储介质

Also Published As

Publication number Publication date
BR112015015358A2 (pt) 2017-07-11
EP3007163A4 (en) 2016-12-21
JP2016504618A (ja) 2016-02-12
BR112015015358B1 (pt) 2021-12-07
KR101686632B1 (ko) 2016-12-15
MX2015007251A (es) 2016-03-31
CN103295568B (zh) 2015-10-14
EP3007163A1 (en) 2016-04-13
CN103295568A (zh) 2013-09-11
KR20150079763A (ko) 2015-07-08
RU2015121498A (ru) 2017-03-02
JP6085036B2 (ja) 2017-02-22
MX361534B (es) 2018-12-07
EP3007163B1 (en) 2019-01-02
RU2635835C2 (ru) 2017-11-16

Similar Documents

Publication Publication Date Title
WO2014190786A1 (zh) 一种异步合唱方法和装置
TWI576822B (zh) K歌處理方法及系統
US11120782B1 (en) System, method, and non-transitory computer-readable storage medium for collaborating on a musical composition over a communication network
US10235898B1 (en) Computer implemented method for providing feedback of harmonic content relating to music track
WO2013135167A1 (zh) 一种移动终端处理文本的方法、相关设备及系统
CN105808710A (zh) 一种远程 k 歌终端、远程k 歌系统及远程k 歌方法
US20200027367A1 (en) Remote control of lesson software by teacher
CN201229768Y (zh) 一种电子钢琴
WO2022022395A1 (zh) 文本的时间标注方法、装置、电子设备和可读存储介质
WO2023051246A1 (zh) 视频录制方法、装置、设备及存储介质
JP2019041412A (ja) トラックの取り込み及び転送
CN108109652A (zh) 一种k歌合唱录音的方法
JP6712017B2 (ja) 楽譜提供システム、方法およびプログラム
TWM452421U (zh) 可語音控制之點歌系統
CN107147741B (zh) 基于互联网的音乐创作评选方法、终端、服务器及系统
CN105578107B (zh) 多媒体通话呼叫建立过程和游戏的互动融合方法及装置
CN106777151A (zh) 一种多媒体文件输出方法及装置
US20240233776A9 (en) Systems and methods for lyrics alignment
KR100967125B1 (ko) 네트워크 휴대용 장치에서의 특징 추출
JP2010079069A (ja) 配信装置、配信方法及び配信用プログラム
KR101458526B1 (ko) 공동음원 생성 서비스 시스템 및 그 방법, 그리고 이에 적용되는 장치
KR20080064232A (ko) 음원 검색 시스템 및 방법과, 이를 위한 음원 검색 서버
TWI512500B (zh) 調整多媒體裝置之數位訊號處理設定之方法及系統,及其電腦程式產品
TWI270000B (en) Speech file generating system and method
JP5197189B2 (ja) 歌唱消費カロリーによるキャラクタ表示処理方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 14804158; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 20157013606; Country of ref document: KR; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2015543298; Country of ref document: JP; Kind code of ref document: A)
WWE Wipo information: entry into national phase (Ref document number: MX/A/2015/007251; Country of ref document: MX)
REG Reference to national code (Ref country code: BR; Ref legal event code: B01A; Ref document number: 112015015358; Country of ref document: BR)
ENP Entry into the national phase (Ref document number: 2015121498; Country of ref document: RU; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
WWE Wipo information: entry into national phase (Ref document number: 2014804158; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 112015015358; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20150625)