WO2014190786A1 - Asynchronous chorus method and device - Google Patents
Asynchronous chorus method and device
- Publication number
- WO2014190786A1 (PCT/CN2014/072300)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- file
- chorus
- audio
- audio file
- accompaniment
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/36—Accompaniment arrangements
- G10H1/361—Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
- G10H1/365—Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems the accompaniment information being stored on a host computer and transmitted to a reproducing terminal by means of a network, e.g. public telephone lines
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/36—Accompaniment arrangements
- G10H1/361—Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/155—Musical effects
- G10H2210/245—Ensemble, i.e. adding one or more voices, also instrumental voices
- G10H2210/251—Chorus, i.e. automatic generation of two or more extra voices added to the melody, e.g. by a chorus effect processor or multiple voice harmonizer, to produce a chorus or unison effect, wherein individual sounds from multiple sources with roughly the same timbre converge and are perceived as one
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/171—Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
- G10H2240/175—Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments for jam sessions or musical collaboration through a network, e.g. for composition, ensemble playing or repeating; Compensation of network or internet delays therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/023—Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
Definitions
- the present application is based on, and claims priority to, Chinese Patent Application No. 201310210338.5, which is incorporated herein by reference. Technical field
- the embodiments of the present disclosure relate to the field of network technologies, and in particular to an asynchronous chorus method and apparatus. Background
- the mobile terminal has a social karaoke application with built-in reverberation and echo effects, which can modify the user's voice.
- the application also displays the lyrics synchronized with the accompaniment during karaoke, accurate to each word as in a KTV.
- the application also provides an intelligent scoring function, and the resulting score can be shared with friends.
- when a karaoke song is recorded on a mobile terminal, it is generally sung by a single user; after it is sung, it is submitted to the server for saving and display, and other users of the application can play the song and rate it. If more people are to sing together, multiple users must sing at the same mobile terminal at the same time and then submit the recording to the server for saving.
- the embodiment of the present disclosure provides an asynchronous chorus method and device to solve the problem that the chorus effect is poor, the processing process is cumbersome, and the cost is high.
- an embodiment of the present disclosure discloses an asynchronous chorus method, the method comprising: after receiving an audio file uploaded by a terminal requesting to participate in a chorus using a first accompaniment file, marking the portion of the audio file that contains a mix and is not yet marked as a chorus part;
- determining the annotated audio file as a second accompaniment file; wherein the audio file is formed by the terminal participating in the chorus using the first accompaniment file encoding the collected audio information with the downloaded first accompaniment file.
- the method further comprises: transmitting the second accompaniment file to a terminal requesting to participate in the chorus using the second accompaniment file.
- wherein marking the portion of the audio file that contains a mix and is not labeled as a part of the chorus includes:
- the portion of the audio file that has a mix and is not labeled is labeled as part of the chorus.
- wherein marking the portion of the audio file that contains a mix and is not labeled as a part of the chorus includes:
- the method further includes:
- the volume information is sent to a terminal requesting to participate in the chorus using the second accompaniment file, prompting the user to perform the chorus using the volume.
- the method further includes:
- the initial accompaniment file after the annotation includes at least one paragraph.
- marking the paragraphs of the initial accompaniment file for chorus includes:
- the present disclosure also discloses another asynchronous chorus method, the method comprising: collecting audio information, and encoding the audio information with a first accompaniment file downloaded from a server to form an audio file;
- the portion of the audio file that has a mix and is not labeled is marked as part of the chorus, and the annotated audio file is uploaded to the server.
- wherein marking the portion of the audio file that contains a mix and is not labeled as a part of the chorus includes:
- the portion of the audio file that has a mix and is not labeled is labeled as part of the chorus.
- wherein marking the portion of the audio file that contains a mix and is not labeled as a part of the chorus includes:
- the method further includes:
- the present disclosure also discloses an asynchronous chorus device, characterized in that the device comprises:
- a first labeling module configured to: after receiving an audio file uploaded by a terminal that requests to use the first accompaniment file to participate in the chorus, labeling the portion of the audio file that has a mixed sound and is not marked as a part of the chorus;
- a determining module configured to determine the annotated audio file as a second accompaniment file; wherein the audio file is formed by the terminal that requests to participate in the chorus using the first accompaniment file encoding the collected audio information with the downloaded first accompaniment file.
- the device further includes:
- a first sending module configured to send the second accompaniment file to a terminal that requests to participate in the chorus using the second accompaniment file.
- the first labeling module includes:
- a first obtaining submodule configured to acquire an audio information location in the audio file
- a first analysis submodule configured to analyze a portion of the audio information location having a sound, the portion of the sound being encoded by the collected audio information and the first accompaniment file;
- the first mix labeling sub-module is used to mark a part of the audio file that has a mix and is not labeled as a part of the chorus.
- the first mixing annotation sub-module includes:
- a first change subunit configured to change a color of the display text corresponding to the portion of the audio file that has a mix and is not labeled
- the device further includes:
- a receiving module configured to: after receiving an audio file uploaded by a terminal that requests to use the first accompaniment file to participate in the chorus, receive the volume information of the collected audio information in the audio file that is requested by the terminal participating in the chorus using the first accompaniment file;
- a second sending module configured to send the volume information to a terminal that requests to use the second accompaniment file to participate in the chorus, and prompt the user to perform the chorus by using the volume.
- the device further includes:
- a second labeling module for marking a paragraph of the initial accompaniment file for chorus
- a third sending module configured to send an initial accompaniment file marked by the second labeling module to a terminal requesting to participate in chorus using the initial accompaniment file
- the initial accompaniment file after the annotation includes at least one paragraph.
- the second labeling module includes:
- a reading submodule configured to read a time interval between each two characters in the initial accompaniment file
- a comparison submodule configured to compare the time interval with a preset threshold
- the text labeling sub-module is configured to mark the end of a paragraph between the two characters when the time interval between the two characters is greater than the preset threshold.
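The time-interval rule above (end a paragraph wherever the gap between two adjacent characters exceeds a preset threshold) can be sketched in a few lines. This is an illustrative Python sketch, not code from the patent; the timestamped-character representation and function name are assumptions:

```python
# Hypothetical sketch: split timed lyric characters into paragraphs
# wherever the gap between two adjacent characters exceeds a threshold.

def mark_paragraphs(chars, threshold):
    """chars: list of (timestamp_seconds, character) tuples, sorted by time.
    Returns a list of paragraphs, each a string of characters."""
    paragraphs = []
    current = []
    prev_time = None
    for t, ch in chars:
        # A gap larger than the threshold marks the end of a paragraph.
        if prev_time is not None and t - prev_time > threshold:
            paragraphs.append("".join(current))
            current = []
        current.append(ch)
        prev_time = t
    if current:
        paragraphs.append("".join(current))
    return paragraphs

# Example: a 3.5-second silence separates two paragraphs.
lyrics = [(0.0, "a"), (0.5, "b"), (1.0, "c"), (4.5, "d"), (5.0, "e")]
print(mark_paragraphs(lyrics, threshold=2.0))  # ['abc', 'de']
```

A larger threshold yields fewer, longer paragraphs; as the text notes, the threshold is left to be set from practical experience.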
- the present disclosure further discloses another asynchronous chorus device, characterized in that the device comprises:
- An encoding module configured to collect audio information, and encode the audio information with a first accompaniment file downloaded from a server to form an audio file
- the third labeling module is configured to mark the portion of the audio file that has a mix and is not labeled as a part of the chorus, and upload the marked audio file to the server.
- the third labeling module includes:
- a third obtaining submodule configured to acquire an audio information location in the audio file
- a third analysis submodule configured to analyze a portion of the audio information location having a sound, the portion of the sound being encoded by the collected audio information and the first accompaniment file;
- a third mixing annotation sub-module for labeling a portion of the audio file that has a mix and is not labeled as a portion of the chorus.
- the third mixing annotation sub-module includes:
- a third change subunit configured to change a color of the display text corresponding to the portion of the audio file that has a mix and is not labeled
- a third labeling subunit configured to mark, in the name of the audio file, a portion of the audio file that has a mix and is not labeled.
- the device further includes:
- the record uploading module is configured to: after uploading the annotated audio file to the server, record volume information of the collected audio information in the audio file, and upload the volume information to the server.
- the portion of the audio file that contains a mix and is not marked is marked as the chorus part;
- the annotated audio file is determined to be a second accompaniment file; wherein the audio file is formed by the terminal requesting to participate in the chorus using the first accompaniment file encoding the collected audio information with the downloaded first accompaniment file.
- the user's terminal can download the audio files of other users' chorus as the accompaniment files for its own chorus, so that chorus members can sing their own parts at different times and in different places, avoiding the poor chorus effect caused by crowding or by differences in the loudness and distance of the singers' voices.
- each user can sing multiple times, or separately process the part they sing, without affecting the parts sung by other users; therefore the entire song does not have to be re-sung because of one user's poor performance.
- FIG. 1 is a flowchart of an asynchronous chorus method according to an exemplary embodiment
- FIG. 2 is a flowchart of an asynchronous chorus method according to an exemplary embodiment 2;
- FIG. 3 is a flowchart of an asynchronous chorus method according to an exemplary embodiment 3;
- FIG. 4 is a flowchart of an asynchronous chorus method according to an exemplary embodiment 4;
- FIG. 5 is a schematic diagram of an annotated initial accompaniment file according to an exemplary embodiment 5;
- FIG. 6 is a structural block diagram of an asynchronous chorus device according to an exemplary embodiment 6;
- FIG. 7 is a structural block diagram of an asynchronous chorus device according to an exemplary embodiment 7;
- FIG. 8 is a structural block diagram of an asynchronous chorus apparatus according to an exemplary embodiment 8;
- FIG. 9 is a structural block diagram of an asynchronous chorus apparatus according to an exemplary embodiment 9. Detailed description
- chorus members can sing their own parts at different times and in different places, avoiding the cumbersome process and poor effect caused by crowding or by differences in the loudness and distance of the voices, and each user can process the part he sings separately without affecting the parts other users sing.
- Embodiment 1:
- when a chorus is performed, multiple users are required to sing at the same mobile terminal at the same time and submit the recording to the server for processing; the users' voices may differ in loudness and distance, resulting in a poor chorus effect. Moreover, if one user performs poorly during the chorus, the song may have to be re-sung and the server must re-process it; the process is cumbersome and costly.
- FIG. 1 shows a flowchart of an asynchronous chorus method according to Embodiment 1 of the present disclosure, which includes:
- step 101: after receiving the audio file uploaded by the terminal requesting to participate in the chorus using the first accompaniment file, mark the portion of the audio file that contains a mix and is not yet marked as the chorus part.
- the terminal provided by the embodiment of the present disclosure may be a smart phone, a tablet, or the like.
- the first accompaniment file may first be downloaded from the server; the terminal requesting to participate in the chorus using the first accompaniment file then collects the user's audio information, encodes that audio information with the downloaded first accompaniment file to form an audio file corresponding to that terminal, and uploads the audio file to the server.
- the server may mark the portion of the audio file that has a mix and is not marked as a part of the chorus.
- the audio file is formed by the terminal that requests to participate in the chorus using the first accompaniment file encoding the collected audio information with the downloaded first accompaniment file.
- step 102: the annotated accompaniment file is determined as the second accompaniment file.
- the labeled accompaniment file may be determined as the second accompaniment file.
- the second accompaniment file described above can be downloaded from the server, and the second accompaniment file can be directly used to participate in the chorus.
- the user's terminal can download the audio files of other users' chorus as the accompaniment files for its own chorus, so that chorus members can sing their own parts at different times and in different places, avoiding the poor chorus effect caused by crowding or by differences in the loudness and distance of the voices.
- each user can sing multiple times, or separately process the part they sing, without affecting the parts sung by other users; therefore the entire song does not have to be re-sung because of one user's poor performance.
- Embodiment 2:
- FIG. 2 shows a flowchart of an asynchronous chorus method according to Embodiment 2 of the present disclosure, which may include:
- step 201: mark the paragraphs of the initial accompaniment file for the chorus, and send the annotated initial accompaniment file to the terminal requesting to participate in the chorus using the initial accompaniment file.
- the initial accompaniment file may be annotated first, and the annotated initial accompaniment file may be sent to a terminal requesting to participate in the chorus using the initial accompaniment file.
- the labeled initial accompaniment file may include at least one paragraph.
- the initial accompaniment file can be annotated automatically by reading time intervals. Therefore, the process of marking the paragraphs of the initial accompaniment file for the chorus may include:
- the annotation proposed in the embodiment of the present disclosure may be a special symbol (for example, a dot) marked between two characters. When the two characters belong to two different paragraphs in the accompaniment file, the special symbol can serve as the mark for the end of a paragraph; alternatively, to distinguish male and female singers, "male:" or "female:" may be marked between the two characters, in which case the two characters belong to two different paragraphs in the accompaniment file and the label "male:" or "female:" serves as the mark for the end of a paragraph.
- the initial accompaniment file may be labeled in other manners, for example, by using different color labels, etc., and the embodiment of the present disclosure does not limit this.
- the labeling can be made more accurate.
- those skilled in the art can set the specific value of the preset threshold according to practical experience.
- the embodiment of the present disclosure may also mark the initial accompaniment file in other manners, for example, according to the pitch of the accompaniment, and the like, which is not limited by the embodiment of the present disclosure.
- step 202: after receiving the audio file uploaded by the terminal requesting to participate in the chorus using the first accompaniment file, mark the portion of the audio file that contains a mix and is not yet marked as the chorus part.
- chorus can be performed by multiple users through different terminals.
- the terminal requesting to participate in the chorus can collect the audio information of the user, and then encode the audio information with the accompaniment file downloaded by the terminal participating in the chorus to form an audio file, and finally can The encoded audio file is uploaded to the server.
- the server may mark the portion of the audio file that has a mix and is not marked as a part of the chorus.
- the process of marking the mixed and unlabeled portion of the audio file as the chorus part may include: b1, obtaining the audio information positions in the audio file; b2, analyzing which of those positions contain a mix, the mix being formed by encoding the collected audio information with the first accompaniment file;
- the portion of the audio file that has a mix and is not labeled may be marked in the following manner:
- the color of the displayed text corresponding to the mixed and unlabeled portion of the audio file may be changed (for example, marked in red); or the mixed and unlabeled portion may be marked in the name of the audio file (for example, text in the file name can indicate which part was sung).
- both of the above types of labeling may also be applied to the mixed and unlabeled portion of the audio file; the embodiment of the present disclosure does not limit this.
- the portion of the audio file that has a mix and is not labeled may be marked as a part of the chorus in other manners, for example, the text corresponding to the part with the mix and not marked is bolded, and the like.
- the embodiment of the present disclosure does not limit this.
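As a rough illustration of this marking step, one can scan the audio positions, treat segments that contain a mix and are not yet labeled as newly sung chorus parts, and mark them (here by changing a display color, one of the options the text mentions). A hypothetical Python sketch; all field names are assumptions:

```python
# Hypothetical sketch: label each mixed, not-yet-labeled segment of an
# audio file as a sung chorus part, e.g. by changing its lyric color.

def label_chorus_parts(segments):
    """segments: list of dicts with 'has_mix' and 'labeled' booleans.
    Marks each mixed, unlabeled segment and returns their indices."""
    newly_labeled = []
    for i, seg in enumerate(segments):
        if seg["has_mix"] and not seg["labeled"]:
            seg["labeled"] = True           # mark as part of the chorus
            seg["display_color"] = "red"    # e.g. show the lyric text in red
            newly_labeled.append(i)
    return newly_labeled

segs = [
    {"has_mix": True,  "labeled": True},    # sung by an earlier user
    {"has_mix": True,  "labeled": False},   # sung in this pass -> label it
    {"has_mix": False, "labeled": False},   # not yet sung
]
print(label_chorus_parts(segs))  # [1]
```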
- step 203: the annotated audio file is determined as the second accompaniment file.
- the labeled audio file can be determined as the second accompaniment file, and the user requesting the terminal participating in the chorus using the second accompaniment file can sing according to the second accompaniment file.
- step 204: the second accompaniment file is sent to the terminal requesting to participate in the chorus using the second accompaniment file.
- the server may transmit the determined second accompaniment file to the terminal requesting to participate in the chorus using the second accompaniment file. Since different paragraphs are marked in the initial accompaniment file, the user of that terminal can chorus the paragraph corresponding to himself according to the labels in the initial accompaniment file and the chorus parts marked in the downloaded second accompaniment file.
- the collected audio information may then be encoded with the second accompaniment file to generate an audio file, and the audio file is uploaded to the server; the server marks the audio file uploaded by the terminal participating in the chorus using the second accompaniment file, and the above process repeats.
- step 205: receive the volume information of the collected audio information in the audio file uploaded by the terminal requesting to participate in the chorus using the first accompaniment file.
- a volume reminder may also be adopted in the embodiment of the present disclosure.
- the terminal can record the volume information of the collected audio information, and then upload the volume information of the collected audio information to the server.
- the volume information uploaded by the terminal requesting to participate in the chorus using the first accompaniment file may thus be received.
- step 206: the volume information is sent to the terminal requesting to participate in the chorus using the second accompaniment file, prompting the user to perform the chorus at that volume.
- the server may send the volume information to the terminal requesting to use the second accompaniment file to participate in the chorus, so that the user of the terminal can be prompted to perform the chorus using the above volume.
- the user of the terminal requesting to participate in the chorus using the second accompaniment file can adjust his chorus volume to match that of the user of the terminal that participated using the first accompaniment file, thereby further enhancing the chorus effect.
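One way to realize this volume prompt is to record a per-frame RMS level of the collected audio and upload it alongside the file; the next singer's terminal can then display it (for example as a waveform) as a target level. A hypothetical Python sketch, assuming raw samples in [-1, 1]; none of this is prescribed by the patent:

```python
# Hypothetical sketch: summarize the collected audio's volume as one
# RMS value per frame, to prompt the next singer to match it.
import math

def volume_info(samples, frame_size):
    """Return one RMS value per frame of the collected audio samples."""
    rms = []
    for i in range(0, len(samples), frame_size):
        frame = samples[i:i + frame_size]
        rms.append(math.sqrt(sum(s * s for s in frame) / len(frame)))
    return rms

info = volume_info([0.0, 0.0, 0.3, 0.3, 0.4, 0.4], frame_size=2)
print(info)  # one RMS value per 2-sample frame
```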
- step 203 and step 205 may be performed in parallel, and step 204 and step 206 may be performed in parallel; the embodiment of the present disclosure does not limit the specific sequence of the above steps.
- chorus members can sing their own parts at different times and in different places, avoiding the poor chorus effect caused by crowding or by differences in the loudness and distance of the voices, and each user can separately process the part he sings without affecting the parts other users sing; in addition, the embodiment of the present disclosure can record the chorus volume of the user of the terminal requesting to participate in the chorus and prompt the user of the next such terminal to sing at that volume, further enhancing the chorus effect.
- Embodiments 1 and 2 mainly introduce the asynchronous chorus method from the server side; Embodiments 3 and 4 below describe it from the terminal side.
- Embodiment 3:
- FIG. 3 shows a flowchart of an asynchronous chorus method according to Embodiment 3 of the present disclosure, which may include:
- step 301: audio information is collected, and the audio information is encoded with the first accompaniment file downloaded from the server to form an audio file.
- the first accompaniment file may first be downloaded from the server; then, when the user performs the chorus, the terminal may collect the user's audio information and encode it with the first accompaniment file downloaded from the server to form an audio file.
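The encoding step mixes the collected voice into the downloaded accompaniment to form a single audio file. A toy Python sketch of the mixing, assuming both streams are equal-length lists of PCM samples in [-1, 1]; a real implementation would resample, clip, and encode with an audio codec:

```python
# Hypothetical sketch: mix collected voice samples into the accompaniment
# to form the new audio file (represented here as a plain sample list).

def encode_audio(voice, accompaniment, voice_gain=0.5):
    """Return accompaniment with the (gain-scaled) voice mixed in."""
    assert len(voice) == len(accompaniment)
    return [a + voice_gain * v for v, a in zip(voice, accompaniment)]

mixed = encode_audio([0.2, 0.0, -0.2], [0.1, 0.1, 0.1])
print(mixed)
```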
- step 302: mark the portion of the audio file that contains a mix and is not yet labeled as the chorus part, and upload the annotated audio file to the server.
- the terminal may mark the portion of the audio file that has a mix and is not labeled as a chorus portion, and upload the marked audio file to the server.
- the server may use the annotated audio file as the second accompaniment file; the terminal requesting to participate in the chorus using the second accompaniment file may download it from the server and directly use the second accompaniment file to participate in the chorus.
- the user's terminal can download audio files sung by other users and use them as accompaniment files for its own chorus, so that chorus members can sing their own parts at different times and in different places, avoiding the poor chorus effect caused by crowding or by differences in loudness and distance.
- Embodiment 4:
- FIG. 4 shows a flowchart of an asynchronous chorus method according to Embodiment 4 of the present disclosure, which may include:
- step 401: audio information is collected, and the audio information is encoded with the first accompaniment file downloaded from the server to form an audio file.
- the terminal requesting to participate in the chorus using the first accompaniment file can collect the audio information of the chorus user, and then encode the audio information with the first accompaniment file downloaded from the server to form an audio file.
- step 402: mark the portion of the audio file that contains a mix and is not yet labeled as the chorus part, and upload the annotated audio file to the server.
- the foregoing labeling process may be implemented by requesting a terminal participating in the chorus.
- the process of marking the mixed and unlabeled portion of the audio file as the chorus part may include:
- the mixed and unlabeled portion of the audio file in c3 above may be marked as the chorus part in the following manner:
- the embodiment of the present disclosure can also apply both of the above types of labeling to the portions of the audio file that contain a mix and are not labeled.
- step 403: record the volume information of the collected audio information in the audio file, and upload the volume information to the server.
- a volume reminder may also be adopted in the embodiment of the present disclosure.
- the terminal can record the volume information of the collected audio information, and then upload the volume information of the collected audio information to the server.
- the server may use the annotated audio file as the second accompaniment file; the user of the terminal requesting to participate using the second accompaniment file can download it directly from the server for the chorus and, at the same time, obtain the above volume information, which prompts the user to sing at that volume.
- the above processes in the embodiments of the present disclosure are not limited to execution by the terminal requesting to participate in the chorus using the first accompaniment file; any terminal can execute them.
- the chorus volume of the user of the terminal requesting the chorus may be recorded, and the user of the next terminal requesting the chorus is prompted to sing at that volume, further improving the chorus effect.
- Embodiment 5:
- the initial accompaniment file is marked by the server, and the initial accompaniment file after labeling is shown in FIG. 5.
- the initial accompaniment file after the annotation may include three parts A, B, and C.
- the three terminals may sing the three parts separately.
- the part marked A is sung via user A's terminal, the part marked B via user B's terminal, and the part marked C via user C's terminal.
- the specific labeling process is described in the server-side description below. The asynchronous chorus method proposed by the embodiment of the present disclosure will now be described for the terminal side and the server side, respectively.
- User A's terminal downloads the annotated initial accompaniment file from the server and plays it, and user A sings the part marked A.
- the terminal of user A collects the audio information of user A, and records the volume information of the audio information of user A.
- the terminal of user A encodes the collected audio information with the initial accompaniment file to generate a song X1 (i.e., an audio file), and uploads song X1 and user A's volume information to the server.
- the terminal of user B downloads and plays the song X1 from the server, uses it as an accompaniment, prompts user B to sing according to the volume information of user A (for example, by prompting in waveform form), and user B continues by singing the part marked B.
- the terminal of the user B collects the audio information of the user B, and records the volume information of the audio information of the user B.
- the terminal of user B encodes the collected audio information and the song X1 to generate a song X2, and uploads the song X2 and the volume information of user B to the server.
- the terminal of user C downloads and plays the song X2 from the server, uses it as an accompaniment, prompts user C to sing according to the volume information of user B, and user C continues by singing the part marked C.
- the terminal of the user C collects the audio information of the user C, and records the volume information of the audio information of the user C.
- the terminal of user C encodes the collected audio information and the song X2 to generate a song X3, and uploads the song X3 and the volume information of user C to the server, thereby completing the entire song.
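The chain above (initial accompaniment → X1 → X2 → X3) can be sketched as successive mixes. The patent does not specify the encoding step, so sample-wise addition of 16-bit PCM values, clipped to range, is used here as an illustrative stand-in; all names and sample values are hypothetical:

```python
def mix(accompaniment, voice):
    """Mix a voice track into an accompaniment track (sample-wise sum,
    clipped to the 16-bit range). Stands in for the unspecified 'encoding'
    step in the embodiment."""
    n = max(len(accompaniment), len(voice))
    a = accompaniment + [0] * (n - len(accompaniment))
    v = voice + [0] * (n - len(voice))
    return [max(-32768, min(32767, x + y)) for x, y in zip(a, v)]

# Each user's terminal mixes its collected voice into the previous result
# and uploads it as the next accompaniment.
initial = [100, 100, 100, 100]
x1 = mix(initial, [10, 0, 0, 0])   # user A sings part A
x2 = mix(x1, [0, 20, 0, 0])        # user B sings part B over X1
x3 = mix(x2, [0, 0, 30, 0])        # user C sings part C over X2
```

Because each song X(n) already contains all earlier voices, downloading it as the next accompaniment is what lets the chorus proceed asynchronously.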
- the initial accompaniment file may be labeled in other ways, which is not limited by the embodiment of the present disclosure.
- for the specific labeling process, reference may be made to the related description of the second embodiment; the embodiments of the present disclosure do not discuss it in detail herein.
- the initial file may be marked as three parts, as shown in FIG. 3, that is, labeled as part A, part B, and part C.
- processing the song (i.e., audio file) uploaded by the terminal;
- determining the sung part of the song according to the audio information location of the song; for example, the portion of the song that has a mix can be analyzed.
- for the specific process, refer to the related description of the second embodiment.
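One plausible way to locate the mixed (sung) portion is to compare frame energies of the uploaded file against the bare accompaniment: frames where the upload's energy clearly exceeds the accompaniment's are assumed to contain the added voice. This is a sketch under that assumption; the frame size, margin, and function names are not from the patent:

```python
def sung_frames(accompaniment, uploaded, frame=4, margin=1.5):
    """Return indices of frames in which the uploaded (mixed) audio has
    noticeably more energy than the accompaniment alone, i.e. frames
    assumed to contain the user's voice."""
    def energy(samples, i):
        chunk = samples[i * frame:(i + 1) * frame]
        return sum(s * s for s in chunk)

    n = min(len(accompaniment), len(uploaded)) // frame
    return [i for i in range(n)
            if energy(uploaded, i) > margin * energy(accompaniment, i) + 1e-9]
```

The frames returned here would then be mapped back to the lyric paragraphs so the corresponding part (A, B, or C) can be marked as already sung.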
- the lyrics of the part A that has been sung can be marked with a different color, or part A can be marked in the name of the song X1; the lyric color and the song name can also both be marked at the same time.
- the foregoing labeling process is implemented by a server. It should be noted that the labeling process may also be implemented by a terminal, which is not limited by the embodiment of the disclosure.
- the chorus can sing his own parts at different times and in different places.
- Each user, when singing their own part, can sing multiple times or process their own voice separately without affecting other users (for example, reverb or other sound effects can be provided, so that the part the user sings has special effects while the other users' sounds are unchanged).
- the device may be a server-side device that interacts with the terminal side.
- the above device may include:
- the first labeling module 601 is configured to: after receiving the audio file uploaded by the terminal requesting to participate in the chorus, label the portion of the audio file that has a mix and is not marked as a part of the chorus;
- the determining module 602 is configured to determine the annotated audio file as the second accompaniment file.
- the audio file is formed by the terminal requesting to use the first accompaniment file to participate in the chorus, by encoding the collected audio information with the downloaded first accompaniment file.
- the user's terminal can download the audio files of other users' chorus as the accompaniment files of its own chorus, so that the chorus members can sing their own parts at different times and in different places, without the chorus effect being degraded by crowding or by differences in the loudness and distance of the voices.
- each user can sing multiple times, or separately process the part they sing, without affecting the parts other users sing; therefore, a poor performance by one user does not lead to re-singing the entire song.
- Example 7 Referring to FIG. 7, a structural block diagram of an asynchronous chorus device according to Embodiment 7 of the present disclosure is shown.
- the device may be a server-side device that interacts with the terminal side.
- the device can include:
- a second labeling module 701, configured to mark a paragraph of the initial accompaniment file for chorus
- the second labeling module 701 can include:
- the reading sub-module 7011 is configured to read the time interval between each two characters in the initial accompaniment file;
- the comparison sub-module 7012 is configured to compare the time interval with a preset threshold;
- the text labeling sub-module 7013 is configured to mark the end of a paragraph between two characters when the time interval between the two characters is greater than a preset threshold.
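The logic of sub-modules 7011-7013 can be sketched as follows. The patent does not fix a data layout, so (character, start-time) tuples and the threshold value are assumptions for illustration:

```python
def mark_paragraphs(chars, threshold):
    """Mark a paragraph end after each character whose time gap to the
    next character exceeds the preset threshold (seconds).

    `chars` is a list of (character, start_time) tuples, assumed sorted
    by time. Returns the indices of characters that end a paragraph.
    """
    ends = []
    for i in range(len(chars) - 1):
        gap = chars[i + 1][1] - chars[i][1]  # read the interval (7011)
        if gap > threshold:                  # compare with threshold (7012)
            ends.append(i)                   # mark paragraph end (7013)
    if chars:
        ends.append(len(chars) - 1)  # the last character closes the final paragraph
    return ends
```

With a 2-second threshold, a long silent gap between lyric lines becomes a paragraph boundary, which is how the initial accompaniment file could be split into the parts A, B, and C described above.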
- a third sending module 702 configured to send an initial accompaniment file marked by the second labeling module to a terminal that requests to use the initial accompaniment file to participate in the chorus;
- the initial accompaniment file after the annotation includes at least one paragraph.
- the first labeling module 703 is configured to: after receiving the audio file uploaded by the terminal that requests to use the first accompaniment file to participate in the chorus, mark the portion of the audio file that has a mixed sound and is not labeled as a part of the chorus;
- the first labeling module 703 can include:
- a first obtaining submodule 7031 configured to acquire an audio information location in the audio file
- a first analysis sub-module 7032 configured to analyze the portion of the audio information location that has sound, the sounded portion being formed by encoding the collected audio information with the first accompaniment file;
- the first mix labeling sub-module 7033 is used to mark the portion of the audio file that has a mix and is not labeled as a part of the chorus.
- the first mix labeling sub-module 7033 can include:
- a first change subunit configured to change a color of a display text corresponding to a portion of the audio file that has a mix and is not labeled
- the first labeling subunit is used to mark, in the name of the audio file, the portion of the audio file that has a mix and is not marked.
- a determining module 704 configured to determine the annotated audio file as a second accompaniment file
- the audio file is formed by the terminal requesting to use the first accompaniment file to participate in the chorus, by encoding the collected audio information with the downloaded first accompaniment file.
- the first sending module 705 is configured to send the second accompaniment file to the terminal that requests to use the second accompaniment file to participate in the chorus;
- the receiving module 706 is configured to: after receiving the audio file uploaded by the terminal that requests to use the first accompaniment file to participate in the chorus, receive the volume information of the audio information that is requested to be collected by the terminal that participates in the chorus using the first accompaniment file;
- a second sending module 707 configured to send the volume information to the terminal that requests to use the second accompaniment file to participate in the chorus, to prompt the user with the volume to use for the chorus.
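The volume information handled by modules 706 and 707 could be as simple as a per-segment RMS value computed from the collected audio and shown to the next user (e.g. as a waveform). This is a sketch under that assumption; the segment length and representation are illustrative, not taken from the patent:

```python
import math

def volume_profile(samples, segment=4):
    """Compute the RMS volume of each fixed-length segment of the
    collected audio. The resulting list would be uploaded with the song
    and displayed to prompt the next user's singing volume."""
    out = []
    for i in range(0, len(samples) - segment + 1, segment):
        chunk = samples[i:i + segment]
        out.append(math.sqrt(sum(s * s for s in chunk) / segment))
    return out
```

A flat list of RMS values is enough for a waveform-style prompt, and it keeps the uploaded metadata small compared with re-sending the audio itself.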
- the chorus members can sing their own parts at different times and in different places, without the inconvenience of gathering people or the differences in loudness and distance of the voices, and each user can separately process the part he or she sings without affecting the parts other users sing; in addition, the embodiments of the present disclosure can also record the volume information of the user of the terminal requesting to participate in the chorus, and prompt the user of the next terminal requesting to participate to sing at that volume, thereby further enhancing the effect of the chorus.
- the device may be a device on the terminal side, which interacts with the server side.
- the above device may include:
- the encoding module 801 is configured to collect audio information, and encode the audio information with the first accompaniment file downloaded from the server to form an audio file;
- the third labeling module 802 is configured to label the portion of the audio file that has a mix and is not labeled as a part of the chorus, and upload the marked audio file to the server.
- the user's terminal can download the audio files of other users' chorus as the accompaniment files of its own chorus, so that the chorus members can sing their own parts at different times and in different places, without the chorus effect being degraded by crowding or by differences in the loudness and distance of the voices.
- the device may be a device on the terminal side, which interacts with the server side.
- the above device may include:
- the encoding module 901 is configured to collect audio information, and encode the audio information with the first accompaniment file downloaded from the server to form an audio file;
- a third labeling module 902 configured to mark a portion of the audio file that has a mix and is not labeled as a part of the chorus, and upload the marked audio file to the server;
- the third annotation module 902 can include:
- a third obtaining sub-module 9021 configured to acquire an audio information location in the audio file
- a third analysis sub-module 9022 configured to analyze the portion of the audio information location that has sound, the sounded portion being formed by encoding the collected audio information with the first accompaniment file;
- the third mix labeling sub-module 9023 is used to mark the portion of the audio file that has a mix and is not labeled as a part of the chorus.
- the third mix labeling sub-module 9023 can include:
- a third change subunit configured to change a color of the display text corresponding to the portion of the audio file that has a mix and is not labeled
- the record uploading module 903 is configured to upload the annotated audio file to the server, record the volume information of the collected audio information in the audio file, and upload the volume information to the server.
- the asynchronous chorus device proposed in the embodiments of the present disclosure can record the volume information of the user of the terminal requesting the chorus, and prompt the user of the next terminal requesting the chorus to sing at that volume, thereby further improving the effect of the chorus.
- the invention may be described in the general context of computer-executable instructions executed by a computer, such as a program module.
- program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types.
- the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are connected through a communication network.
- program modules can be located in both local and remote computer storage media including storage devices.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- General Engineering & Computer Science (AREA)
- Electrophonic Musical Instruments (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Reverberation, Karaoke And Other Acoustics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Machine Translation (AREA)
Abstract
Description
Claims
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020157013606A KR101686632B1 (en) | 2013-05-30 | 2014-02-20 | Asynchronous chorus method, device, program and recording medium |
RU2015121498A RU2635835C2 (en) | 2013-05-30 | 2014-02-20 | Method and device for asynchronous choral performance |
EP14804158.5A EP3007163B1 (en) | 2013-05-30 | 2014-02-20 | Asynchronous chorus method and device |
JP2015543298A JP6085036B2 (en) | 2013-05-30 | 2014-02-20 | Asynchronous chorus method, asynchronous chorus apparatus, program, and recording medium |
BR112015015358-5A BR112015015358B1 (en) | 2013-05-30 | 2014-02-20 | METHOD AND DEVICE FOR ASYNCHRONOUS CHORUS |
MX2015007251A MX361534B (en) | 2013-05-30 | 2014-02-20 | Asynchronous chorus method and device. |
US14/296,801 US9224374B2 (en) | 2013-05-30 | 2014-06-05 | Methods and devices for audio processing |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310210338.5 | 2013-05-30 | ||
CN201310210338.5A CN103295568B (en) | 2013-05-30 | 2013-05-30 | A kind of asynchronous chorus method and apparatus |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/296,801 Continuation US9224374B2 (en) | 2013-05-30 | 2014-06-05 | Methods and devices for audio processing |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014190786A1 true WO2014190786A1 (en) | 2014-12-04 |
Family
ID=49096329
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2014/072300 WO2014190786A1 (en) | 2013-05-30 | 2014-02-20 | Asynchronous chorus method and device |
Country Status (8)
Country | Link |
---|---|
EP (1) | EP3007163B1 (en) |
JP (1) | JP6085036B2 (en) |
KR (1) | KR101686632B1 (en) |
CN (1) | CN103295568B (en) |
BR (1) | BR112015015358B1 (en) |
MX (1) | MX361534B (en) |
RU (1) | RU2635835C2 (en) |
WO (1) | WO2014190786A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3306606A4 (en) * | 2015-05-27 | 2019-01-16 | Guangzhou Kugou Computer Technology Co., Ltd. | Audio processing method, apparatus and system |
CN110660376A (en) * | 2019-09-30 | 2020-01-07 | 腾讯音乐娱乐科技(深圳)有限公司 | Audio processing method, device and storage medium |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103295568B (en) * | 2013-05-30 | 2015-10-14 | 小米科技有限责任公司 | A kind of asynchronous chorus method and apparatus |
US9224374B2 (en) | 2013-05-30 | 2015-12-29 | Xiaomi Inc. | Methods and devices for audio processing |
CN105006234B (en) * | 2015-05-27 | 2018-06-29 | 广州酷狗计算机科技有限公司 | A kind of K sings processing method and processing device |
CN105023559A (en) * | 2015-05-27 | 2015-11-04 | 腾讯科技(深圳)有限公司 | Karaoke processing method and system |
CN106486128B (en) * | 2016-09-27 | 2021-10-22 | 腾讯科技(深圳)有限公司 | Method and device for processing double-sound-source audio data |
CN106686431B (en) * | 2016-12-08 | 2019-12-10 | 杭州网易云音乐科技有限公司 | Audio file synthesis method and device |
CN106601220A (en) * | 2016-12-08 | 2017-04-26 | 天脉聚源(北京)传媒科技有限公司 | Method and device for recording antiphonal singing of multiple persons |
CN108630240B (en) * | 2017-03-23 | 2020-05-26 | 北京小唱科技有限公司 | Chorus method and apparatus |
CN107993637B (en) * | 2017-11-03 | 2021-10-08 | 厦门快商通信息技术有限公司 | Karaoke lyric word segmentation method and system |
CN108109652A (en) * | 2017-12-27 | 2018-06-01 | 北京酷我科技有限公司 | A kind of method of K songs chorus recording |
EP3815031A4 (en) * | 2018-06-29 | 2022-04-27 | Smule, Inc. | Audiovisual collaboration system and method with seed/join mechanic |
CN109147746B (en) * | 2018-07-27 | 2021-07-16 | 维沃移动通信有限公司 | Karaoke method and terminal |
US11693616B2 (en) * | 2019-08-25 | 2023-07-04 | Smule, Inc. | Short segment generation for user engagement in vocal capture applications |
CN111326132B (en) | 2020-01-22 | 2021-10-22 | 北京达佳互联信息技术有限公司 | Audio processing method and device, storage medium and electronic equipment |
CN111462767B (en) * | 2020-04-10 | 2024-01-09 | 全景声科技南京有限公司 | Incremental coding method and device for audio signal |
CN112312163B (en) * | 2020-10-30 | 2024-05-28 | 北京字跳网络技术有限公司 | Video generation method, device, electronic equipment and storage medium |
CN116704978A (en) * | 2022-02-28 | 2023-09-05 | 北京字跳网络技术有限公司 | Music generation method, device, apparatus, storage medium, and program |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5235124A (en) * | 1991-04-19 | 1993-08-10 | Pioneer Electronic Corporation | Musical accompaniment playing apparatus having phoneme memory for chorus voices |
JP2005010639A (en) * | 2003-06-20 | 2005-01-13 | Yamaha Corp | Karaoke machine |
CN101345047A (en) * | 2007-07-12 | 2009-01-14 | 英业达股份有限公司 | Sound mixing system and method for automatic human voice correction |
CN102456340A (en) * | 2010-10-19 | 2012-05-16 | 盛大计算机(上海)有限公司 | Karaoke in-pair singing method based on internet and system thereof |
TW201228290A (en) * | 2010-12-28 | 2012-07-01 | Tse-Ming Chang | Networking multi-person asynchronous chorus audio/video works system |
CN103295568A (en) * | 2013-05-30 | 2013-09-11 | 北京小米科技有限责任公司 | Asynchronous chorusing method and asynchronous chorusing device |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3241372B2 (en) * | 1990-11-27 | 2001-12-25 | パイオニア株式会社 | Karaoke performance method |
JP2006195215A (en) * | 2005-01-14 | 2006-07-27 | Sony Ericsson Mobilecommunications Japan Inc | Communication terminal device and musical performance system |
JP4431507B2 (en) * | 2005-01-31 | 2010-03-17 | 株式会社第一興商 | Karaoke system |
US20070163428A1 (en) * | 2006-01-13 | 2007-07-19 | Salter Hal C | System and method for network communication of music data |
JP4382786B2 (en) * | 2006-08-22 | 2009-12-16 | 株式会社タイトー | Audio mixdown device, audio mixdown program |
US20080184870A1 (en) * | 2006-10-24 | 2008-08-07 | Nokia Corporation | System, method, device, and computer program product providing for a multiple-lyric karaoke system |
JP2009031549A (en) * | 2007-07-27 | 2009-02-12 | Yamaha Corp | Melody display control device and karaoke device |
JP5014073B2 (en) * | 2007-11-12 | 2012-08-29 | ヤマハ株式会社 | Melody display control device and karaoke device |
DE102008008388A1 (en) * | 2008-02-09 | 2009-08-13 | Cambiz Seyed-Asgari | Digital multi-track recording and reproduction system for e.g. audio content, has storage medium for storage of multimedia contents in digital data format, where audio tracks are stored as individual tracks on system |
JP2010014823A (en) * | 2008-07-01 | 2010-01-21 | Nippon Telegr & Teleph Corp <Ntt> | Musical piece information control device |
WO2010041147A2 (en) * | 2008-10-09 | 2010-04-15 | Futureacoustic | A music or sound generation system |
US20110126103A1 (en) * | 2009-11-24 | 2011-05-26 | Tunewiki Ltd. | Method and system for a "karaoke collage" |
US9147385B2 (en) * | 2009-12-15 | 2015-09-29 | Smule, Inc. | Continuous score-coded pitch correction |
JP5457867B2 (en) * | 2010-02-08 | 2014-04-02 | Kddi株式会社 | Image display device, image display method, and image display program |
CN102158745B (en) * | 2011-02-18 | 2014-11-19 | 深圳创维数字技术股份有限公司 | Implementation method of Karaoke service, terminal, server terminal and system |
CN103021401B (en) * | 2012-12-17 | 2015-01-07 | 上海音乐学院 | Internet-based multi-people asynchronous chorus mixed sound synthesizing method and synthesizing system |
- 2013
- 2013-05-30 CN CN201310210338.5A patent/CN103295568B/en active Active
- 2014
- 2014-02-20 WO PCT/CN2014/072300 patent/WO2014190786A1/en active Application Filing
- 2014-02-20 JP JP2015543298A patent/JP6085036B2/en active Active
- 2014-02-20 MX MX2015007251A patent/MX361534B/en active IP Right Grant
- 2014-02-20 RU RU2015121498A patent/RU2635835C2/en active
- 2014-02-20 KR KR1020157013606A patent/KR101686632B1/en active IP Right Grant
- 2014-02-20 BR BR112015015358-5A patent/BR112015015358B1/en active IP Right Grant
- 2014-02-20 EP EP14804158.5A patent/EP3007163B1/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5235124A (en) * | 1991-04-19 | 1993-08-10 | Pioneer Electronic Corporation | Musical accompaniment playing apparatus having phoneme memory for chorus voices |
JP2005010639A (en) * | 2003-06-20 | 2005-01-13 | Yamaha Corp | Karaoke machine |
CN101345047A (en) * | 2007-07-12 | 2009-01-14 | 英业达股份有限公司 | Sound mixing system and method for automatic human voice correction |
CN102456340A (en) * | 2010-10-19 | 2012-05-16 | 盛大计算机(上海)有限公司 | Karaoke in-pair singing method based on internet and system thereof |
TW201228290A (en) * | 2010-12-28 | 2012-07-01 | Tse-Ming Chang | Networking multi-person asynchronous chorus audio/video works system |
CN103295568A (en) * | 2013-05-30 | 2013-09-11 | 北京小米科技有限责任公司 | Asynchronous chorusing method and asynchronous chorusing device |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3306606A4 (en) * | 2015-05-27 | 2019-01-16 | Guangzhou Kugou Computer Technology Co., Ltd. | Audio processing method, apparatus and system |
CN110660376A (en) * | 2019-09-30 | 2020-01-07 | 腾讯音乐娱乐科技(深圳)有限公司 | Audio processing method, device and storage medium |
CN110660376B (en) * | 2019-09-30 | 2022-11-29 | 腾讯音乐娱乐科技(深圳)有限公司 | Audio processing method, device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
EP3007163B1 (en) | 2019-01-02 |
CN103295568A (en) | 2013-09-11 |
KR101686632B1 (en) | 2016-12-15 |
CN103295568B (en) | 2015-10-14 |
JP6085036B2 (en) | 2017-02-22 |
KR20150079763A (en) | 2015-07-08 |
BR112015015358B1 (en) | 2021-12-07 |
RU2635835C2 (en) | 2017-11-16 |
MX2015007251A (en) | 2016-03-31 |
MX361534B (en) | 2018-12-07 |
JP2016504618A (en) | 2016-02-12 |
RU2015121498A (en) | 2017-03-02 |
EP3007163A4 (en) | 2016-12-21 |
EP3007163A1 (en) | 2016-04-13 |
BR112015015358A2 (en) | 2017-07-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2014190786A1 (en) | Asynchronous chorus method and device | |
TWI576822B (en) | Processing method of making song request and system thereof | |
US11120782B1 (en) | System, method, and non-transitory computer-readable storage medium for collaborating on a musical composition over a communication network | |
US10235898B1 (en) | Computer implemented method for providing feedback of harmonic content relating to music track | |
WO2016188211A1 (en) | Audio processing method, apparatus and system | |
CN105808710A (en) | Remote karaoke terminal, remote karaoke system and remote karaoke method | |
US20200027367A1 (en) | Remote control of lesson software by teacher | |
CN201229768Y (en) | Electronic piano | |
CN105766001A (en) | System and method for audio processing using arbitrary triggers | |
WO2022022395A1 (en) | Time marking method and apparatus for text, and electronic device and readable storage medium | |
WO2023051246A1 (en) | Video recording method and apparatus, device, and storage medium | |
JP2019041412A (en) | Track trapping and transfer | |
CN108109652A (en) | A kind of method of K songs chorus recording | |
TWM452421U (en) | Voice activation song serach system | |
CN107147741B (en) | Music creation selecting method, terminal, server and system based on Internet | |
CN106777151A (en) | A kind of multimedia file output intent and device | |
US20240233776A9 (en) | Systems and methods for lyrics alignment | |
KR100967125B1 (en) | Feature extraction in a networked portable device | |
JP2010079069A (en) | Delivery device, delivery method, and program for delivery | |
KR101458526B1 (en) | System and method for collaboration music, and apparatus applied to the same | |
JP7063533B2 (en) | Karaoke system | |
TWI512500B (en) | Methods and systems of adjusting digital signal processing settings for multimedia devices, and computer program products thereof | |
TWI270000B (en) | Speech file generating system and method | |
JP5197189B2 (en) | Character display processing method by calorie consumption | |
Ong | Unmasking The Phantom: An Analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14804158 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20157013606 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2015543298 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2015/007251 Country of ref document: MX |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112015015358 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 2015121498 Country of ref document: RU Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2014804158 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 112015015358 Country of ref document: BR Kind code of ref document: A2 Effective date: 20150625 |