CN108630240B - Chorus method and apparatus - Google Patents

Chorus method and apparatus

Info

Publication number
CN108630240B
CN108630240B (application CN201710179795.0A)
Authority
CN
China
Prior art keywords
lyric
clause
sung
determining
recorded
Prior art date
Legal status
Active
Application number
CN201710179795.0A
Other languages
Chinese (zh)
Other versions
CN108630240A (en)
Inventor
陈华
Current Assignee
Beijing Xiaochang Technology Co ltd
Original Assignee
Beijing Xiaochang Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaochang Technology Co ltd filed Critical Beijing Xiaochang Technology Co ltd
Priority application: CN201710179795.0A
Publication of application: CN108630240A
Application granted; publication of CN108630240B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/36 Accompaniment arrangements
    • G10H1/361 Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 Digital recording or reproducing
    • G11B20/10527 Audio or video recording; Data buffering arrangements
    • G11B2020/10537 Audio or video recording
    • G11B2020/10546 Audio or video recording specifically adapted for audio data

Abstract

The invention provides a chorus method comprising the following steps: during the first recording, for each lyric clause, determining the number of first-type time segments among the time segments corresponding to that clause; judging whether the lyric clause was sung; when the clause is judged to have been sung, recording its corresponding start time and end time in a mark list; and when recording finishes, uploading the mark list, the lyrics corresponding to the song to be recorded, and the recorded audio file to a server. Each lyric clause is thus evaluated, each sung clause is marked, and the marks are uploaded to the server together with the lyrics and the recorded audio file. When a chorus participant downloads the semi-finished chorus recording through a client, the client obtains the marks, determines from them which lyrics have been sung and which have not, and displays the two differently.

Description

Chorus method and apparatus
Technical Field
The invention relates to the technical field of multimedia processing, in particular to a chorus method and a chorus device.
Background
Singing applications are currently popular with users; they can provide users with online solo and chorus services.
When a user sings, the current song chorus scheme works as follows: the user who initiates the chorus sings one part of a target song; a first client records this semi-finished chorus and sends it to a server. A second client then downloads the semi-finished chorus; while it plays, the user participating in the chorus sings the other part of the target song, the second client records the result to obtain a complete chorus work, and the participating user sends the complete work to the server through the second client.
Although chorus can be completed in this way, in practice, when a user participating in the chorus obtains the semi-finished chorus through a client, the user cannot tell which parts of it have been sung and which have not. As a result, some lyrics are missed or sung twice, which degrades the user experience.
Disclosure of Invention
The invention provides a chorus method and apparatus to solve the problem that, when a semi-finished chorus is recorded, a user participating in the chorus cannot tell which parts of it have been sung and which have not, which degrades the user experience.
In order to solve the above problem, the present invention discloses a chorus method comprising: during the first recording, for each lyric clause, determining the number of first-type time segments among the time segments corresponding to that clause, a first-type time segment being one in which a sound signal meeting a set spectral characteristic is detected; judging whether the lyric clause was sung according to the number of first-type time segments; when the clause is judged to have been sung, recording its corresponding start time and end time in a mark list; and when recording finishes, uploading the mark list, the lyrics corresponding to the song to be recorded, and the recorded audio file to a server.
Preferably, the step of judging whether the lyric clause is sung according to the number of the first type time segments comprises: when the number of the first type time segments is larger than or equal to a preset value, determining that the lyric clause is sung; and when the number of the first type time segments is smaller than the preset value, determining that the lyric clause is not sung.
Preferably, the step of judging whether the lyric clause is sung according to the number of the first type time segments comprises: determining the total number of time segments corresponding to the lyric clauses; and judging whether the lyric clause is sung or not according to the total number and the number of the first type time segments.
Preferably, the step of determining whether the lyric clause was sung according to the total number and the number of the first type time segments includes: comparing the result of multiplying the number of the first type time segments by a preset coefficient with the total number; if the result is larger than or equal to the total number, determining that the lyric clause is sung; and if the result is smaller than the total number, determining that the lyric clause is not sung.
Preferably, after the step of uploading the mark list, the lyrics corresponding to the song to be recorded, and the recorded audio file to a server, the method further comprises: acquiring the mark list, the lyrics corresponding to the song to be recorded, and the recorded audio file from the server; and performing, clause by clause, a first preset operation on the lyrics of the song to be recorded that fall between each start time and end time in the mark list.
In order to solve the above problem, the present invention also discloses a chorus apparatus comprising: a statistical module for determining, for each lyric clause during the first recording, the number of first-type time segments among the time segments corresponding to that clause, a first-type time segment being one in which a sound signal meeting a set spectral characteristic is detected; a judging module for judging whether the lyric clause was sung according to the number of first-type time segments; a mark list module for recording the start time and end time corresponding to the lyric clause in a mark list when the judging module indicates that the clause was sung; and a sending module for uploading the mark list, the lyrics corresponding to the song to be recorded, and the recorded audio file to a server when recording finishes.
Preferably, the judging module includes: the first determining submodule is used for determining that the lyric clause is sung when the number of the first type time segments is greater than or equal to a preset value; and the second determining submodule is used for determining that the lyric clause is not sung when the number of the first type time segments is smaller than the preset value.
Preferably, the judging module further comprises: the total number determining submodule is used for determining the total number of time segments corresponding to the lyric clauses; and the third determining submodule is used for judging whether the lyric clause is sung according to the total number and the number of the first type time segments.
Preferably, the third determination submodule includes: the judging unit is used for comparing the result of multiplying the number of the first type time segments by a preset coefficient with the total number; the first determining unit is used for determining that the lyric clause is sung when the judgment result of the judging unit is larger than or equal to the total number; and the second determining unit is used for determining that the lyric clause is not sung when the judgment result of the judging unit is less than the total number.
Preferably, the apparatus further comprises: an acquisition module for acquiring the mark list, the lyrics corresponding to the song to be recorded, and the recorded audio file from the server; and a first operation module for performing, clause by clause, a first preset operation on the lyrics of the song to be recorded that fall between each start time and end time in the mark list.
Compared with the prior art, the invention has the following advantages:
According to the chorus scheme provided by the embodiments of the invention, during the first recording the number of first-type time segments among the time segments of each lyric clause is determined; whether each clause was sung is judged from that number; when a clause is judged to have been sung, its start time and end time are recorded in a mark list; and when recording finishes, the mark list, the lyrics corresponding to the song to be recorded, and the recorded audio file are uploaded to a server. The scheme thus evaluates each lyric clause, marks the clauses that were sung, and uploads the marks together with the lyrics and the recorded audio file. When a chorus participant downloads the semi-finished chorus recording through a client, the client obtains the marks, determines from them which lyrics have been sung and which have not, and displays the two differently. A user joining the chorus therefore knows whether each lyric clause has already been sung, lyrics are neither missed nor sung twice, and the user experience is improved.
Drawings
FIG. 1 is a flow chart illustrating the steps of a chorus method according to a first embodiment of the present invention;
FIG. 2 is a flow chart of the steps of a chorus method according to a second embodiment of the present invention;
fig. 3 is a block diagram of a chorus apparatus according to a third embodiment of the present invention;
fig. 4 is a block diagram of a chorus apparatus according to a fourth embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Example one
Referring to fig. 1, a flowchart illustrating steps of a chorus method according to a first embodiment of the present invention is shown.
The chorus method of the embodiment of the invention comprises the following steps:
step 101: and when the first recording is carried out, aiming at each lyric clause, determining the number of the first type time segments in each time segment corresponding to the lyric clause.
A first-type time segment is one in which a sound signal meeting the set spectral characteristic is detected.
During the first recording, the user who initiates the chorus records over a clean accompaniment containing no vocals. A song has N lyric clauses, and each lyric clause has M time segments.
During the first recording, the user who initiates the chorus chooses which passages to sing according to personal preference, so some lyric clauses are sung and others are not. The singing application is preset with a detection program that judges, from the volume of the audio stream input by the microphone and the set spectral characteristics of the human voice, whether sound is present in each time segment.
In the embodiment of the invention, the time segments with the set spectrum characteristics are set as the first type time segments, and the number of the first type time segments is counted.
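The detection step above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the patent requires checking both the volume of the microphone audio stream and a set spectral characteristic of the human voice, whereas for brevity this sketch checks volume only (RMS amplitude), and the threshold value is a hypothetical choice.

```python
import math

# Hypothetical threshold: the patent's detection program also tests a
# "set spectral characteristic" of the human voice, which is omitted here.
VOLUME_THRESHOLD = 0.01

def is_first_type_segment(samples):
    """True when sound is detected in this time segment (volume check only)."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return rms >= VOLUME_THRESHOLD

def count_first_type_segments(segments):
    """Number of first-type time segments among one lyric clause's segments."""
    return sum(1 for seg in segments if is_first_type_segment(seg))
```

A real implementation would additionally test each segment's spectrum against the set human-voice characteristic before counting it as first-type.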
Step 102: and judging whether the lyric clause is sung or not according to the number of the first type time segments.
Whether the lyric clause was sung is judged from the counted number of first-type time segments, using either a preset formula or a fixed preset value.
Step 103: and when the lyric clause is judged to be sung, recording the starting time and the ending time corresponding to the lyric clause to a mark list.
And marking the time segment of the lyric clause sung, and recording the marked time segment into a mark list.
Each entry in the mark list is a two-tuple. When a sung lyric clause is marked with a two-tuple, the clause's start time and end time are recorded, indicating that the lyrics within that time period have been sung.
Step 104: and when the recording is finished, uploading the mark list, the lyrics corresponding to the song to be recorded and the recorded audio file to a server.
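The mark list described here can be sketched as a plain list of (start time, end time) two-tuples; the field names and data structure below are illustrative assumptions, not taken from the patent.

```python
def build_mark_list(clauses, was_sung):
    """clauses: list of dicts with 'start' and 'end' times (seconds);
    was_sung: parallel list of booleans from the judging step.
    Returns the mark list of (start, end) two-tuples for sung clauses."""
    marks = []
    for clause, sung in zip(clauses, was_sung):
        if sung:
            marks.append((clause["start"], clause["end"]))
    return marks
```

Downstream, a client only needs to test whether a clause's (start, end) pair appears in this list to know that the clause was sung.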
After the user who initiated the chorus finishes recording, the recorded work is uploaded to the server. It comprises the mark list, the lyrics corresponding to the song to be recorded, and the recorded audio file, which makes it convenient for other users to join the chorus.
The chorus method provided by this embodiment of the invention evaluates each lyric clause during the first recording, marks the clauses that were sung, and uploads the marks together with the lyrics corresponding to the song to be recorded and the recorded audio file to a server. When a chorus participant downloads the semi-finished chorus recording through a client, the client obtains the marks, determines from them which lyrics have been sung and which have not, and displays the two differently. A user joining the chorus therefore knows whether each lyric clause has already been sung, lyrics are neither missed nor sung twice, and the user experience is improved.
Example two
Referring to fig. 2, a flowchart illustrating steps of a chorus method according to a second embodiment of the present invention is shown.
The chorus method of the embodiment of the invention comprises the following steps:
step 201: and when the first recording is carried out, aiming at each lyric clause, determining the number of the first type time segments in each time segment corresponding to the lyric clause.
During the first recording, the user who initiates the chorus records over a clean accompaniment containing no vocals. A song is divided into N lyric clauses, and each lyric clause is divided into M time segments.
During the first recording, the user who initiates the chorus chooses which passages to sing according to personal preference, so some lyric clauses are sung and others are not. The singing software is preset with a detection program that judges, from the volume of the audio stream input by the microphone and the spectral characteristics of the human voice, whether sound was recorded in each time segment.
And setting the time segments with the spectrum characteristics as first type time segments, and counting the number of the first type time segments.
Step 202: and determining the total number of time segments corresponding to the lyric clauses.
After the number of first-type time segments is counted, the total number of time segments into which the lyric clause is divided is counted.
Step 203: and judging whether the lyric clause is sung or not according to the total number and the number of the first type time segments.
A preferred way to determine whether the lyric clause was sung based on the total number and the number of the first type of time segments is as follows:
comparing the result of multiplying the number of first-type time segments by a preset coefficient with the total number: if the result is greater than or equal to the total number, the lyric clause is determined to have been sung; if the result is less than the total number, the lyric clause is determined not to have been sung. For example, suppose a lyric clause corresponds to n time segments and the set spectral characteristic is detected in m of them. The result of multiplying m by the preset coefficient is compared with n: when the product is greater than or equal to n, the lyric clause is judged to have been sung; when the product is less than n, the clause is judged not to have been sung.
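The comparison just described reduces to a single inequality. As a sketch (the coefficient value is the implementer's choice; 3 appears here only as an example default):

```python
def clause_was_sung(m_first_type: int, n_total: int, coefficient: int = 3) -> bool:
    """Judge whether a lyric clause was sung: multiply the number of
    first-type time segments (m) by a preset coefficient and compare the
    product with the clause's total number of time segments (n)."""
    return coefficient * m_first_type >= n_total
```

For instance, with coefficient 3, a clause of 10 segments counts as sung once sound is detected in at least 4 of them.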
It should be noted that a person skilled in the art may set the preset coefficient according to actual needs; for example, it may be set to 2, 3, 4, etc.
It should be noted that in specific implementations the judgment is not limited to using both the total number and the number of first-type time segments; whether the lyric clause was sung may also be judged from the number of first-type time segments alone, as follows: when the number of first-type time segments is greater than or equal to a preset value, the lyric clause is determined to have been sung; when the number is smaller than the preset value, the clause is determined not to have been sung.
The core of the specific scheme is that the number of the first type time segments is counted and then compared with a preset value, and whether the lyrics are sung or not is determined according to the compared result.
It should be noted that, a person skilled in the art may set the preset value according to actual needs, for example, the preset value may be set to 10, 20, 30, and the like, which is not limited thereto.
Step 204: and when the lyric clause is judged to be sung, recording the starting time and the ending time corresponding to the lyric clause to a mark list.
Each entry in the mark list is a two-tuple. When a sung lyric clause is marked with a two-tuple, the clause's start time and end time are recorded, indicating that the lyrics within that time period have been sung.
Step 205: and when the recording is finished, uploading the mark list, the lyrics corresponding to the song to be recorded and the recorded audio file to a server.
After the user who initiates the chorus finishes recording, the recorded works are uploaded to a server, wherein the recorded works comprise a mark list, lyrics corresponding to the song to be recorded and a recorded audio file, and the chorus of the user who participates in the chorus is facilitated.
Step 206: and acquiring a mark list on the server, the lyrics corresponding to the song to be recorded and the recorded audio file.
When a user participating in the chorus wants to join, that user downloads, through a client, the files uploaded by the user who initiated the chorus.
Step 207: and executing a first preset operation on the lyrics corresponding to the starting time and the ending time in the mark list in the lyrics corresponding to the song to be recorded in a sentence dividing manner.
The client of the participating user reads the downloaded files, performs the first preset operation, clause by clause, on the lyrics of the song to be recorded according to each start time and end time in the mark list, and then displays the lyrics with the first preset operation applied.
It should be noted that a person skilled in the art sets the first preset operation according to actual needs. The first preset operation may be coloring the sung lyrics, highlighting the sung lyrics, or lowering the brightness of the sung lyrics; this is not limited in the embodiments of the present invention.
It should be noted that in the song to be recorded, in addition to performing the first preset operation on the sung lyrics, a second preset operation may be performed on the unsung lyrics, so as to distinguish the sung lyrics from the unsung lyrics.
The second preset operation is similar to the first: it may be coloring the unsung lyrics, highlighting them, or lowering their brightness.
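As a minimal sketch of the client side, assuming each lyric clause carries its text plus start/end times, the mark list determines which clauses receive the first preset operation (shown here as a "sung" style) and which receive the second ("unsung"); the style labels and field names are illustrative assumptions, not from the patent.

```python
def render_lyrics(clauses, mark_list):
    """Return (text, style) pairs: 'sung' for clauses whose (start, end)
    two-tuple appears in the downloaded mark list, 'unsung' otherwise."""
    marked = set(mark_list)
    return [
        (c["text"], "sung" if (c["start"], c["end"]) in marked else "unsung")
        for c in clauses
    ]
```

The client would then map "sung" and "unsung" to the chosen presentation (color, highlight, or reduced brightness).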
The chorus method provided by this embodiment of the invention evaluates each lyric clause during the first recording, marks the clauses that were sung, and uploads the marks together with the lyrics corresponding to the song to be recorded and the recorded audio file to a server. When a chorus participant downloads the semi-finished chorus recording through a client, the client obtains the marks, determines from them which lyrics have been sung and which have not, and displays the two differently. A user joining the chorus therefore knows whether each lyric clause has already been sung, lyrics are neither missed nor sung twice, and the user experience is improved.
EXAMPLE III
Referring to fig. 3, a block diagram of a chorus apparatus according to a third embodiment of the present invention is shown.
The chorus apparatus of this embodiment of the invention comprises: a statistical module 301, configured to determine, for each lyric clause during the first recording, the number of first-type time segments among the time segments corresponding to that clause, a first-type time segment being one in which a sound signal meeting a set spectral characteristic is detected; a judging module 302, configured to judge whether the lyric clause was sung according to the number of first-type time segments; a mark list module 303, configured to record the start time and end time corresponding to the lyric clause in a mark list when the judging module indicates that the clause was sung; and a sending module 304, configured to upload the mark list, the lyrics corresponding to the song to be recorded, and the recorded audio file to a server when the recording is finished.
The chorus apparatus provided by this embodiment of the invention evaluates each lyric clause during the first recording, marks the clauses that were sung, and uploads the marks together with the lyrics corresponding to the song to be recorded and the recorded audio file to a server. When a chorus participant downloads the semi-finished chorus recording through a client, the client obtains the marks, determines from them which lyrics have been sung and which have not, and displays the two differently. A user joining the chorus therefore knows whether each lyric clause has already been sung, lyrics are neither missed nor sung twice, and the user experience is improved.
Example four
Referring to fig. 4, a block diagram of a chorus apparatus according to a fourth embodiment of the present invention is shown.
The chorus apparatus of this embodiment of the invention comprises: a statistical module 401, configured to determine, for each lyric clause during the first recording, the number of first-type time segments among the time segments corresponding to that clause, a first-type time segment being one in which a sound signal meeting a set spectral characteristic is detected; a judging module 402, configured to judge whether the lyric clause was sung according to the number of first-type time segments; a mark list module 403, configured to record the start time and end time corresponding to the lyric clause in a mark list when the judging module indicates that the clause was sung; and a sending module 404, configured to upload the mark list, the lyrics corresponding to the song to be recorded, and the recorded audio file to a server when the recording is finished.
Preferably, the determining module 402 comprises: the first determining sub-module 4021 is configured to determine that the lyric clause was sung when the number of the first type time segments is greater than or equal to a preset value; the second determining sub-module 4022 is configured to determine that the lyric clause has not been sung when the number of the first type time segments is smaller than the preset value.
Preferably, the determining module 402 further comprises: a total number determining submodule 4023, configured to determine a total number of time segments corresponding to the lyric clause; a third determining sub-module 4024, configured to determine whether the lyric clause was sung according to the total number and the number of the first type time segments.
Preferably, the third determining sub-module 4024 includes: the judging unit 40241 is configured to compare a result of multiplying the number of the first type time slices by a preset coefficient with the total number; a first determining unit 40242, configured to determine that the lyric clause was sung when the determination result of the determining unit is greater than or equal to the total number; a second determining unit 40243, configured to determine that the lyric clause has not been sung when the determination result of the determining unit is less than the total number.
Preferably, the apparatus further comprises: an obtaining module 405, configured to obtain the mark list, the lyrics corresponding to the song to be recorded, and the recorded audio file from the server; and a first operation module 406, configured to perform, clause by clause, a first preset operation on the lyrics of the song to be recorded that fall between each start time and end time in the mark list.
The chorus apparatus provided by this embodiment of the invention evaluates each lyric clause during the first recording, marks the clauses that were sung, and uploads the marks together with the lyrics corresponding to the song to be recorded and the recorded audio file to a server. When a chorus participant downloads the semi-finished chorus recording through a client, the client obtains the marks, determines from them which lyrics have been sung and which have not, and displays the two differently. A user joining the chorus therefore knows whether each lyric clause has already been sung, lyrics are neither missed nor sung twice, and the user experience is improved.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The chorus method and apparatus provided by the present invention have been described in detail above, and specific examples have been used to explain the principle and implementation of the invention; the description of the above embodiments is only intended to help in understanding the method and its core idea. Meanwhile, for a person skilled in the art, the specific embodiments and the scope of application may vary according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (10)

1. A chorus method, the method comprising:
determining the number of first-type time segments in each time segment corresponding to each lyric clause when recording for the first time, wherein a sound signal meeting a set spectral characteristic is detected during a first-type time segment;
judging whether the lyric clause is sung or not according to the number of the first type time segments;
when the lyric clause is judged to be sung, recording the starting time and the ending time corresponding to the lyric clause to a mark list;
when the recording is finished, uploading the mark list, the lyrics corresponding to the song to be recorded and the recorded audio file to a server;
wherein, judging whether the lyric clause is sung according to the number of the first type time segments specifically comprises:
and counting the number of the first type time segments, and judging whether the lyric clause is sung according to the counting result and a preset formula or a fixed preset value.
2. The method of claim 1, wherein determining whether the lyric clause has been sung according to the number of first-type time segments comprises:
determining that the lyric clause has been sung when the number of first-type time segments is greater than or equal to a preset value; and
determining that the lyric clause has not been sung when the number of first-type time segments is less than the preset value.
3. The method of claim 1, wherein determining whether the lyric clause has been sung according to the number of first-type time segments comprises:
determining the total number of time segments corresponding to the lyric clause; and
determining whether the lyric clause has been sung according to the total number and the number of first-type time segments.
4. The method of claim 3, wherein determining whether the lyric clause has been sung according to the total number and the number of first-type time segments comprises:
comparing the result of multiplying the number of first-type time segments by a preset coefficient with the total number; determining that the lyric clause has been sung if the result is greater than or equal to the total number; and
determining that the lyric clause has not been sung if the result is less than the total number.
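Claims 2 and 4 define two alternative decision rules for the sung/not-sung judgment. As a minimal illustrative sketch (the function names and example values are ours, not taken from the patent), the two rules can be expressed as:

```python
def clause_sung_by_threshold(voiced_count: int, preset_value: int) -> bool:
    """Claim 2's rule: the clause counts as sung when the number of
    first-type (voiced) time segments reaches a fixed preset value."""
    return voiced_count >= preset_value


def clause_sung_by_ratio(voiced_count: int, total_segments: int,
                         coefficient: float) -> bool:
    """Claim 4's rule: the clause counts as sung when the voiced-segment
    count multiplied by a preset coefficient reaches the clause's total
    segment count, i.e. at least 1/coefficient of the segments are voiced."""
    return voiced_count * coefficient >= total_segments
```

With a coefficient of 2, for example, the ratio rule requires at least half of a clause's time segments to contain a voice signal before the clause is treated as sung.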
5. The method of claim 1, wherein after the step of uploading the mark list, the lyrics corresponding to the song to be recorded, and the recorded audio file to the server, the method further comprises:
acquiring, from the server, the mark list, the lyrics corresponding to the song to be recorded, and the recorded audio file; and
performing a first preset operation, clause by clause, on the lyrics that correspond to the start times and end times in the mark list, among the lyrics of the song to be recorded.
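The client-side flow of claim 1 — count the voiced ("first type") time segments of each clause, then record the start and end times of sung clauses in a mark list — can be sketched as follows. The data shapes and the `is_voiced` detector are hypothetical stand-ins for the spectral-characteristic check; the fixed-preset-value rule of claim 2 is used for the judgment:

```python
def build_mark_list(clauses, is_voiced, preset_value):
    """For each lyric clause, count the time segments that the detector
    flags as containing a voice signal; when that count reaches the
    preset value, record the clause's start/end times in the mark list."""
    mark_list = []
    for clause in clauses:  # clause: {"start": s, "end": e, "segments": [...]}
        voiced_count = sum(1 for seg in clause["segments"] if is_voiced(seg))
        if voiced_count >= preset_value:
            mark_list.append({"start": clause["start"], "end": clause["end"]})
    return mark_list
```

On completion of the recording, this mark list would be uploaded to the server together with the song's lyrics and the recorded audio file.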
6. A chorus apparatus, the apparatus comprising:
a statistics module, configured to determine, for each lyric clause when the lyric clause is recorded for the first time, the number of first-type time segments among the time segments corresponding to the lyric clause, wherein a first-type time segment is a time segment in which a sound signal matching a set spectral characteristic is detected;
a judging module, configured to determine whether the lyric clause has been sung according to the number of first-type time segments, specifically by counting the number of first-type time segments and determining whether the lyric clause has been sung according to the counting result and either a preset formula or a fixed preset value;
a mark list module, configured to record the start time and end time corresponding to the lyric clause in a mark list when the judging module determines that the lyric clause has been sung; and
a sending module, configured to upload the mark list, the lyrics corresponding to the song to be recorded, and the recorded audio file to a server when recording is finished.
7. The apparatus of claim 6, wherein the judging module comprises:
a first determining submodule, configured to determine that the lyric clause has been sung when the number of first-type time segments is greater than or equal to a preset value; and
a second determining submodule, configured to determine that the lyric clause has not been sung when the number of first-type time segments is less than the preset value.
8. The apparatus of claim 6, wherein the judging module further comprises:
a total number determining submodule, configured to determine the total number of time segments corresponding to the lyric clause; and
a third determining submodule, configured to determine whether the lyric clause has been sung according to the total number and the number of first-type time segments.
9. The apparatus of claim 8, wherein the third determining submodule comprises:
a judging unit, configured to compare the result of multiplying the number of first-type time segments by a preset coefficient with the total number;
a first determining unit, configured to determine that the lyric clause has been sung when the comparison result of the judging unit is greater than or equal to the total number; and
a second determining unit, configured to determine that the lyric clause has not been sung when the comparison result of the judging unit is less than the total number.
10. The apparatus of claim 6, further comprising:
an acquisition module, configured to acquire, from the server, the mark list, the lyrics corresponding to the song to be recorded, and the recorded audio file; and
a first operation module, configured to perform a first preset operation, clause by clause, on the lyrics that correspond to the start times and end times in the mark list, among the lyrics of the song to be recorded.
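On the server side, claims 5 and 10 leave the "first preset operation" unspecified. One plausible reading — flagging which lyric clauses the first singer already sang, so the remaining clauses can be presented to a chorus partner — could look like this hypothetical sketch, with illustrative data shapes:

```python
def tag_sung_clauses(lyric_clauses, mark_list):
    """Flag each lyric clause of the song whose (start, end) interval
    appears in the uploaded mark list, so the server can apply the first
    preset operation (e.g. distinct rendering) to the sung clauses."""
    sung_intervals = {(m["start"], m["end"]) for m in mark_list}
    return [dict(clause, sung=(clause["start"], clause["end"]) in sung_intervals)
            for clause in lyric_clauses]
```

A chorus client could then invert the `sung` flag to show the second singer exactly the clauses left for them to record.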
CN201710179795.0A 2017-03-23 2017-03-23 Chorus method and apparatus Active CN108630240B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710179795.0A CN108630240B (en) 2017-03-23 2017-03-23 Chorus method and apparatus


Publications (2)

Publication Number Publication Date
CN108630240A CN108630240A (en) 2018-10-09
CN108630240B (en) 2020-05-26

Family

ID=63707519


Country Status (1)

Country Link
CN (1) CN108630240B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109981893B (en) * 2019-02-28 2021-05-14 Guangzhou Kugou Computer Technology Co., Ltd. Lyric display method and device
CN110349559A (en) * 2019-07-12 2019-10-18 Guangzhou Kugou Computer Technology Co., Ltd. Audio synthesis method, apparatus, system, device and storage medium
CN111475672B (en) * 2020-03-27 2023-12-08 Migu Music Co., Ltd. Lyric distribution method, electronic device and storage medium
CN112130727B (en) * 2020-09-29 2022-02-01 Hangzhou NetEase Cloud Music Technology Co., Ltd. Chorus file generation method, apparatus, device and computer-readable storage medium
CN116704978A (en) * 2022-02-28 2023-09-05 Beijing Zitiao Network Technology Co., Ltd. Music generation method, device, apparatus, storage medium and program

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101312065A (en) * 2007-05-21 2008-11-26 Sony Corporation Content display method, content display apparatus, recording medium, and server apparatus
CN101465120A (en) * 2007-12-17 2009-06-24 Sony Corporation Method for music structure analysis
CN103295568A (en) * 2013-05-30 2013-09-11 Beijing Xiaomi Technology Co., Ltd. Asynchronous chorusing method and asynchronous chorusing device
CN105006234A (en) * 2015-05-27 2015-10-28 Tencent Technology (Shenzhen) Co., Ltd. Karaoke processing method and apparatus
CN105023559A (en) * 2015-05-27 2015-11-04 Tencent Technology (Shenzhen) Co., Ltd. Karaoke processing method and system
CN105047203A (en) * 2015-05-25 2015-11-11 Tencent Technology (Shenzhen) Co., Ltd. Audio processing method, device and terminal
CN105118500A (en) * 2015-06-05 2015-12-02 Fujian Kaimi Network Science & Technology Co., Ltd. Singing evaluation method, system and terminal
CN106448630A (en) * 2016-09-09 2017-02-22 Tencent Technology (Shenzhen) Co., Ltd. Method and device for generating digital music file of song

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050137881A1 (en) * 2003-12-17 2005-06-23 International Business Machines Corporation Method for generating and embedding vocal performance data into a music file format



Similar Documents

Publication Publication Date Title
CN108630240B (en) Chorus method and apparatus
US11461389B2 (en) Transitions between media content items
TWI576822B (en) Song-request processing method and system
CN104282322B (en) Mobile terminal and method and apparatus for identifying song climax parts
US9653056B2 (en) Evaluation of beats, chords and downbeats from a musical audio signal
WO2018045988A1 (en) Method and device for generating digital music score file of song, and storage medium
CN102308295A (en) Music profiling
WO2009038316A2 (en) Karaoke system with a song learning function
CN110335625A (en) Background music prompting and recognition method, apparatus, device and medium
US10235898B1 (en) Computer implemented method for providing feedback of harmonic content relating to music track
JP2015525895A (en) Audio signal analysis
GB2522644A (en) Audio signal analysis
WO2016189307A1 (en) Audio identification method
WO2020015411A1 (en) Method and device for training adaptation level evaluation model, and method and device for evaluating adaptation level
US9037278B2 (en) System and method of predicting user audio file preferences
CN101179347A (en) Method, system and service terminal of providing text file information
CN103871433B (en) Control method and electronic device
CN112037739B (en) Data processing method and device and electronic equipment
CN108628886B (en) Audio file recommendation method and device
KR101547525B1 (en) Automatic music selection apparatus and method considering user input
EP3644306B1 (en) Methods for analyzing musical compositions, computer-based system and machine readable storage medium
Tsai et al. Automatic Singing Performance Evaluation Using Accompanied Vocals as Reference Bases.
Bhatia et al. Analysis of audio features for music representation
US11308199B2 (en) User authentication method using ultrasonic waves
CN106445964B (en) Method and device for processing audio information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant