CN106601220A - Method and device for recording antiphonal singing of multiple persons - Google Patents

Method and device for recording antiphonal singing of multiple persons Download PDF

Info

Publication number
CN106601220A
CN106601220A (application CN201611123170.4A)
Authority
CN
China
Prior art keywords
segment
singer
audio
played
frequency information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611123170.4A
Other languages
Chinese (zh)
Inventor
张新亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TVMining Beijing Media Technology Co Ltd
Original Assignee
TVMining Beijing Media Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TVMining Beijing Media Technology Co Ltd filed Critical TVMining Beijing Media Technology Co Ltd
Priority to CN201611123170.4A priority Critical patent/CN106601220A/en
Publication of CN106601220A publication Critical patent/CN106601220A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/36 Accompaniment arrangements
    • G10H1/361 Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/36 Accompaniment arrangements
    • G10H1/361 Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • G10H1/365 Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems, the accompaniment information being stored on a host computer and transmitted to a reproducing terminal by means of a network, e.g. public telephone lines
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel

Abstract

The invention discloses a method and a device for recording antiphonal singing of multiple persons. The method comprises the following steps: confirming that n singers have entered a preset virtual music room; mapping the m segments of an accompaniment song to be played to the n singers according to the sentence segmentation structure of the song; producing the audio information of each of the m segments according to that sentence segmentation structure; splicing the recorded audio of the m segments into an antiphonal singing audio file of the n singers; and playing the antiphonal singing audio file to the n singers. With the invention, an antiphonal song can be recorded without multiple singers travelling to the same recording venue at the same time, a high-quality antiphonal recording can be produced over the network, recording becomes far more convenient, and the user experience is enhanced.

Description

Method and device for recording antiphonal singing of multiple persons
Technical field
The present invention relates to the field of intelligent audio technology, and in particular to a method and device for recording antiphonal singing of multiple persons.
Background art
In some cases it is desirable for multiple singers to record an antiphonal song together, with every singer performing a part of the song. In practice, however, gathering several singers in the same recording venue during the same time period is very difficult: it consumes considerable material and labour cost, and the singers may simply have no opportunity to meet at one venue at one time. Although the prior art offers various schemes for recording songs over a network, none of them solves the problem of multi-person antiphonal singing well. How to handle this problem properly has therefore become an urgent issue in the industry.
Summary of the invention
The present invention provides a method and device for recording antiphonal singing of multiple persons, so that multiple singers who cannot reach the same recording venue at the same time can still produce a high-quality antiphonal recording over the network.
According to a first aspect of the embodiments of the present invention, a method for recording antiphonal singing of multiple persons is provided, comprising:
confirming that n singers have entered a preset virtual music room;
mapping the m segments of an accompaniment song to be played to the n singers according to the sentence segmentation structure of the song;
producing the audio information of each of the m segments according to the sentence segmentation structure of the song;
splicing the recorded audio of the m segments into an antiphonal singing audio file of the n singers; and
playing the antiphonal singing audio file to the n singers.
In one embodiment, confirming that the n singers have entered the preset virtual music room comprises:
confirming whether the audio input and output functions of the terminal devices used by the n singers work properly; and
confirming that check-in confirmation information has been received from each of the n singers.
In one embodiment, mapping the m segments of the accompaniment song to the n singers according to its sentence segmentation structure comprises:
dividing the accompaniment song into m segments according to its sentence segmentation structure; and
establishing a many-to-one mapping between the m segments and the n singers, where the value of m is greater than the value of n.
In one embodiment, producing the audio information of each of the m segments according to the sentence segmentation structure comprises:
obtaining the many-to-one mapping between the m segments and the n singers after ready information has been received from each of the n singers;
recording, according to the mapping, the audio information of the singer assigned to each of the m segments; and
mixing the singer's audio information of each segment with the audio information of the accompaniment song, and confirming the mixed result as the produced audio information of that segment.
In one embodiment, splicing the recorded audio of the m segments into the antiphonal singing audio file of the n singers comprises:
splicing the mixed audio information of the m segments in their chronological order; and
confirming the spliced audio file as the antiphonal singing audio file of the n singers.
According to a second aspect of the embodiments of the present invention, a device for recording antiphonal singing of multiple persons is provided, comprising:
a confirmation module, configured to confirm that n singers have entered a preset virtual music room;
a correspondence module, configured to map the m segments of an accompaniment song to be played to the n singers according to the sentence segmentation structure of the song;
a making module, configured to produce the audio information of each of the m segments according to the sentence segmentation structure of the song;
a splicing module, configured to splice the recorded audio of the m segments into an antiphonal singing audio file of the n singers; and
a playing module, configured to play the antiphonal singing audio file to the n singers.
In one embodiment, the confirmation module comprises:
a first confirmation submodule, configured to confirm whether the audio input and output functions of the terminal devices used by the n singers work properly; and
a second confirmation submodule, configured to confirm that check-in confirmation information has been received from each of the n singers.
In one embodiment, the correspondence module comprises:
a division submodule, configured to divide the accompaniment song into m segments according to its sentence segmentation structure; and
a mapping submodule, configured to establish a many-to-one mapping between the m segments and the n singers, where the value of m is greater than the value of n.
In one embodiment, the making module comprises:
an acquisition submodule, configured to obtain the many-to-one mapping between the m segments and the n singers after ready information has been received from each of the n singers;
a recording submodule, configured to record, according to the mapping, the audio information of the singer assigned to each of the m segments; and
a mixing submodule, configured to mix the singer's audio information of each segment with the audio information of the accompaniment song, and to confirm the mixed result as the produced audio information of that segment.
In one embodiment, the splicing module comprises:
a splicing submodule, configured to splice the mixed audio information of the m segments in their chronological order; and
a third confirmation submodule, configured to confirm the spliced audio file as the antiphonal singing audio file of the n singers.
Further features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practising the invention. The objects and other advantages of the invention can be realised and obtained by the structures particularly pointed out in the written description, the claims and the accompanying drawings.
The technical solution of the present invention is described in further detail below with reference to the drawings and embodiments.
Brief description of the drawings
The accompanying drawings provide a further understanding of the invention and form part of the specification; together with the embodiments they serve to explain the invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a flowchart of a method for recording antiphonal singing of multiple persons according to an exemplary embodiment of the invention;
Fig. 2 is a flowchart of step S11 of the method according to an exemplary embodiment of the invention;
Fig. 3 is a flowchart of step S12 of the method according to an exemplary embodiment of the invention;
Fig. 4 is a flowchart of step S13 of the method according to an exemplary embodiment of the invention;
Fig. 5 is a flowchart of step S14 of the method according to an exemplary embodiment of the invention;
Fig. 6 is a block diagram of a device for recording antiphonal singing of multiple persons according to an exemplary embodiment of the invention;
Fig. 7 is a block diagram of the confirmation module 61 of the device according to an exemplary embodiment of the invention;
Fig. 8 is a block diagram of the correspondence module 62 of the device according to an exemplary embodiment of the invention;
Fig. 9 is a block diagram of the making module 63 of the device according to an exemplary embodiment of the invention;
Fig. 10 is a block diagram of the splicing module 64 of the device according to an exemplary embodiment of the invention.
Detailed description of the embodiments
The preferred embodiments of the present invention are described below with reference to the drawings. It should be understood that the preferred embodiments described here serve only to illustrate and explain the invention and are not intended to limit it.
Fig. 1 is a flowchart of a method for recording antiphonal singing of multiple persons according to an exemplary embodiment. As shown in Fig. 1, the method comprises the following steps S11-S15:
In step S11, it is confirmed that n singers have entered a preset virtual music room;
In step S12, the m segments of an accompaniment song to be played are mapped to the n singers according to the sentence segmentation structure of the song;
In step S13, the audio information of each of the m segments is produced according to the sentence segmentation structure of the song;
In step S14, the recorded audio of the m segments is spliced into an antiphonal singing audio file of the n singers;
In step S15, the antiphonal singing audio file is played to the n singers.
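As a non-authoritative illustration, the flow of steps S11-S15 can be modelled as follows in Python. All names, the round-robin segment assignment, and the plain-list audio representation are assumptions for the sketch, not part of the disclosed implementation:

```python
# Hypothetical sketch of steps S11-S15; the round-robin assignment and the
# list-of-samples audio representation are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Session:
    singers: list                                    # singer ids in the virtual room
    segments: list                                   # (start_s, end_s) per segment, in song order
    assignment: dict = field(default_factory=dict)   # segment index -> singer id
    recorded: dict = field(default_factory=dict)     # segment index -> mixed segment audio

def record_antiphonal(session, record_fn, mix_fn):
    """record_fn(singer, seg) captures one vocal take; mix_fn adds accompaniment."""
    n, m = len(session.singers), len(session.segments)
    assert m > n, "the method requires more segments than singers (many-to-one)"
    # S12: many-to-one assignment of segments to singers (round-robin here)
    session.assignment = {i: session.singers[i % n] for i in range(m)}
    # S13: record the assigned singer for each segment, then mix with accompaniment
    for i, seg in enumerate(session.segments):
        take = record_fn(session.assignment[i], seg)
        session.recorded[i] = mix_fn(take, seg)
    # S14: splice the mixed segments in chronological (segment-index) order
    return [sample for i in range(m) for sample in session.recorded[i]]
```

With 2 singers and 3 segments, the assignment maps segments 0 and 2 to the first singer and segment 1 to the second, and the spliced output is the concatenation of the mixed takes in segment order.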
In one embodiment, multiple singers wish to record an antiphonal song together, with every singer performing a part of the song. In practice, gathering several singers in the same recording venue during the same time period is very difficult: it consumes considerable material and labour cost, and the singers may have no opportunity to meet at one venue at one time. Although the prior art offers various schemes for recording songs over a network, none of them solves the problem of multi-person antiphonal singing well. The technical solution of the present embodiment handles this problem properly.
The detailed steps are as follows. It is first confirmed that n singers have entered the preset virtual music room. Further, it is confirmed whether the audio input and output functions of the terminal devices used by the n singers work properly, and that check-in confirmation information has been received from each of the n singers.
According to the sentence segmentation structure of the accompaniment song to be played, its m segments are mapped to the n singers. Further, the song is divided into m segments according to its sentence segmentation structure, and a many-to-one mapping is established between the m segments and the n singers, where the value of m is greater than the value of n.
According to the sentence segmentation structure of the song, the audio information of each of the m segments is produced. Further, after ready information has been received from each of the n singers, the many-to-one mapping between the m segments and the n singers is obtained. According to the mapping, the audio information of the singer assigned to each segment is recorded. The singer's audio information of each segment is then mixed with the audio information of the accompaniment song, and the mixed result is confirmed as the produced audio information of that segment.
The recorded audio of the m segments is then spliced into the antiphonal singing audio file of the n singers: the mixed audio information of the m segments is spliced in their chronological order, and the spliced audio file is confirmed as the antiphonal singing audio file of the n singers.
In addition, the antiphonal singing audio file can be played to the n singers.
The technical solution of this embodiment allows an antiphonal song to be recorded without multiple singers travelling to the same recording venue at the same time; a high-quality antiphonal recording is produced over the network, which greatly improves the convenience of recording antiphonal singing and thereby the user experience.
In one embodiment, as shown in Fig. 2, step S11 comprises steps S21-S22:
In step S21, it is confirmed whether the audio input and output functions of the terminal devices used by the n singers work properly;
In step S22, it is confirmed that check-in confirmation information has been received from each of the n singers.
In one embodiment, a preset audio acceptance test program can test whether the audio input and/or output functions of the terminal device used by each of the n singers work properly. Receiving the check-in confirmation information sent by each of the n singers both establishes that the n singers have entered the preset virtual music room and verifies that each singer's terminal device is fully functional, ensuring that the subsequent antiphonal recording can proceed smoothly.
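A minimal sketch of the readiness check described above, assuming hypothetical data structures (a set of received check-ins and a per-singer device test result, neither of which is specified by the patent):

```python
def room_ready(expected_singers, checkins, device_ok):
    """Step S11 sketch: the room is ready only when every expected singer
    has sent a check-in confirmation AND that singer's terminal device
    passed the audio input/output test. `checkins` (a set of singer ids)
    and `device_ok` (singer id -> bool) are illustrative assumptions."""
    return all(s in checkins and device_ok.get(s, False)
               for s in expected_singers)
```

Recording would only begin once this predicate holds for all n singers.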
In one embodiment, as shown in Fig. 3, step S12 comprises steps S31-S32:
In step S31, the accompaniment song to be played is divided into m segments according to its sentence segmentation structure;
In step S32, a many-to-one mapping is established between the m segments and the n singers, where the value of m is greater than the value of n.
In one embodiment, a song typically contains many lines of lyrics, with a slight pause between individual lines or between groups of lines. The song to be played can therefore be divided into m segments according to its sentence segmentation structure. For the same song, the version carrying the singer's vocals and the accompaniment version agree in musical rhythm, i.e. the sentence segmentation structure of the vocal version equals that of the accompaniment version. Accordingly, the accompaniment song is divided into m segments according to its sentence segmentation structure, and a many-to-one mapping is established between the m segments and the n singers, with m greater than n. Each of the n singers is assigned at least one segment, and the m segments may also include segments sung in chorus by at least two of the n singers. The many-to-one mapping can be modified manually.
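As a hedged illustration of the segmentation idea, sentence boundaries could be approximated by detecting pauses (runs of near-silence) in a mono sample stream. The thresholds, the list-of-floats representation, and the function name are all assumptions; the patent does not specify a detection algorithm:

```python
def split_on_pauses(samples, silence=0.02, min_pause=3):
    """Split a mono sample list into (start, end) index pairs at every run
    of at least `min_pause` near-silent samples (|x| < silence). This only
    illustrates pause-based sentence segmentation; real audio would use a
    windowed energy measure rather than raw per-sample amplitude."""
    segments, seg_start, quiet = [], None, 0
    for i, x in enumerate(samples):
        if abs(x) < silence:
            quiet += 1
            if quiet == min_pause and seg_start is not None:
                # close the current segment just before the pause began
                segments.append((seg_start, i - min_pause + 1))
                seg_start = None
        else:
            quiet = 0
            if seg_start is None:
                seg_start = i      # a new sung segment starts here
    if seg_start is not None:
        segments.append((seg_start, len(samples)))  # trailing segment
    return segments
```

Since vocal and accompaniment versions share the same sentence segmentation structure, boundaries detected on one version could be applied to the other.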
In one embodiment, as shown in Fig. 4, step S13 comprises steps S41-S43:
In step S41, after ready information has been received from each of the n singers, the many-to-one mapping between the m segments and the n singers is obtained;
In step S42, the audio information of the singer assigned to each of the m segments is recorded according to the mapping;
In step S43, the singer's audio information of each segment is mixed with the audio information of the accompaniment song, and the mixed result is confirmed as the produced audio information of that segment.
In one embodiment, after ready information has been received from each of the n singers in the preset virtual music room, the many-to-one mapping between the m segments and the n singers is obtained. According to the mapping, the audio information of the singer assigned to each segment is recorded; meanwhile, within any one of the m segments, the audio produced by all singers other than the assigned one is muted. According to the timing information of the accompaniment song, the singer's audio information of each segment is mixed with the audio information of the accompaniment song, and the mixed result is confirmed as the produced audio information of that segment.
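A minimal sketch of steps S42-S43 for one segment, under stated assumptions: audio is a plain list of float samples in [-1, 1], the equal-gain mix and the clamping are illustrative choices, and muting the non-assigned singers is modelled by simply ignoring their takes:

```python
def produce_segment(seg_range, takes, assigned, accompaniment, gain=0.5):
    """Mix the assigned singer's take with the accompaniment over one
    segment, muting every other singer (their takes are never read).
    `seg_range` is (start, end) in accompaniment sample indices; the
    equal-gain mix and clamping are illustrative assumptions."""
    start, end = seg_range
    vocal = takes[assigned]            # non-assigned singers stay muted
    mixed = []
    for i in range(start, end):
        v = vocal[i - start] if i - start < len(vocal) else 0.0
        s = gain * v + gain * accompaniment[i]
        mixed.append(max(-1.0, min(1.0, s)))   # keep samples in valid range
    return mixed
```

Running this once per segment, with `assigned` taken from the many-to-one mapping, yields the produced audio information of each of the m segments.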
In one embodiment, as shown in Fig. 5, step S14 comprises steps S51-S52:
In step S51, the mixed audio information of the m segments is spliced in their chronological order;
In step S52, the spliced audio file is confirmed as the antiphonal singing audio file of the n singers.
In one embodiment, the mixed audio information of the m segments is spliced in chronological order, and the spliced audio file is confirmed as the antiphonal singing audio file of the n singers. For example, suppose 20 mixed segments are produced after 10 singers sing antiphonally, each singer contributing the audio information of 2 segments. The 20 segments are spliced according to their chronological order within the accompaniment song, yielding a single independent audio file, which is confirmed as the antiphonal singing audio file of the 10 singers.
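The splicing of steps S51-S52 reduces to ordered concatenation. A sketch under the assumption that each segment's mixed audio is a list of samples keyed by its segment index (chronological order equals ascending index):

```python
def splice_segments(mixed_segments):
    """Steps S51-S52: concatenate the mixed audio of the m segments in
    chronological order (ascending segment index) into one antiphonal
    audio stream. The dict-of-sample-lists layout is an assumption."""
    spliced = []
    for idx in sorted(mixed_segments):   # chronological = ascending index
        spliced.extend(mixed_segments[idx])
    return spliced
```

In the 20-segment example above, `mixed_segments` would hold indices 0-19, and the returned list would be the single independent audio stream confirmed as the antiphonal singing audio file.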
In one embodiment, Fig. 6 is a block diagram of a device for recording antiphonal singing of multiple persons according to an exemplary embodiment. As shown in Fig. 6, the device comprises a confirmation module 61, a correspondence module 62, a making module 63, a splicing module 64 and a playing module 65.
The confirmation module 61 is configured to confirm that n singers have entered a preset virtual music room;
the correspondence module 62 is configured to map the m segments of an accompaniment song to be played to the n singers according to the sentence segmentation structure of the song;
the making module 63 is configured to produce the audio information of each of the m segments according to the sentence segmentation structure of the song;
the splicing module 64 is configured to splice the recorded audio of the m segments into an antiphonal singing audio file of the n singers;
the playing module 65 is configured to play the antiphonal singing audio file to the n singers.
As shown in Fig. 7, the confirmation module 61 comprises a first confirmation submodule 71 and a second confirmation submodule 72.
The first confirmation submodule 71 is configured to confirm whether the audio input and output functions of the terminal devices used by the n singers work properly;
the second confirmation submodule 72 is configured to confirm that check-in confirmation information has been received from each of the n singers.
As shown in Fig. 8, the correspondence module 62 comprises a division submodule 81 and a mapping submodule 82.
The division submodule 81 is configured to divide the accompaniment song into m segments according to its sentence segmentation structure;
the mapping submodule 82 is configured to establish a many-to-one mapping between the m segments and the n singers, where the value of m is greater than the value of n.
As shown in Fig. 9, the making module 63 comprises an acquisition submodule 91, a recording submodule 92 and a mixing submodule 93.
The acquisition submodule 91 is configured to obtain the many-to-one mapping between the m segments and the n singers after ready information has been received from each of the n singers;
the recording submodule 92 is configured to record, according to the mapping, the audio information of the singer assigned to each of the m segments;
the mixing submodule 93 is configured to mix the singer's audio information of each segment with the audio information of the accompaniment song, and to confirm the mixed result as the produced audio information of that segment.
As shown in Fig. 10, the splicing module 64 comprises a splicing submodule 101 and a third confirmation submodule 102.
The splicing submodule 101 is configured to splice the mixed audio information of the m segments in their chronological order;
the third confirmation submodule 102 is configured to confirm the spliced audio file as the antiphonal singing audio file of the n singers.
Those skilled in the art will appreciate that the embodiments of the present invention may be provided as a method, a system or a computer program product. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps is performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the invention and their technical equivalents, the invention is intended to include them as well.

Claims (10)

1. A method for recording antiphonal singing of multiple persons, characterised by comprising:
confirming that n singers have entered a preset virtual music room;
mapping the m segments of an accompaniment song to be played to the n singers according to the sentence segmentation structure of the song;
producing the audio information of each of the m segments according to the sentence segmentation structure of the song;
splicing the recorded audio of the m segments into an antiphonal singing audio file of the n singers; and
playing the antiphonal singing audio file to the n singers.
2. The method of claim 1, characterised in that confirming that the n singers have entered the preset virtual music room comprises:
confirming whether the audio input and output functions of the terminal devices used by the n singers work properly; and
confirming that check-in confirmation information has been received from each of the n singers.
3. The method of claim 1, characterised in that mapping the m segments of the accompaniment song to the n singers according to its sentence segmentation structure comprises:
dividing the accompaniment song into m segments according to its sentence segmentation structure; and
establishing a many-to-one mapping between the m segments and the n singers, where the value of m is greater than the value of n.
4. The method of claim 1, characterised in that producing the audio information of each of the m segments according to the sentence segmentation structure comprises:
obtaining the many-to-one mapping between the m segments and the n singers after ready information has been received from each of the n singers;
recording, according to the mapping, the audio information of the singer assigned to each of the m segments; and
mixing the singer's audio information of each segment with the audio information of the accompaniment song, and confirming the mixed result as the produced audio information of that segment.
5. The method according to claim 4, wherein splicing the recorded audio of the m segments into the antiphonal-singing audio file of the n singers comprises:
splicing the mixed audio information of the m segments in the temporal order of the m segments; and
confirming the spliced audio file as the antiphonal-singing audio file of the n singers.
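The splicing-in-temporal-order step of claim 5 can be sketched as follows. Keying each mixed segment by its start time is an illustrative assumption; the claim only requires that segments be concatenated in their temporal order.

```python
def splice_segments(mixed_segments):
    """Concatenate the mixed audio of all segments in temporal order.

    mixed_segments maps each segment's start time to its sample list;
    keying on start time is an illustrative assumption.
    """
    spliced = []
    for start in sorted(mixed_segments):  # ascending start times
        spliced.extend(mixed_segments[start])
    return spliced
```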
6. A device for recording antiphonal singing of multiple persons, comprising:
a confirmation module, configured to confirm that n singers have entered a preset virtual music room;
an assignment module, configured to assign m segments of an accompaniment song to be played to the n singers according to the punctuation structure of the accompaniment song;
a production module, configured to produce the audio information of each of the m segments according to the punctuation structure of the accompaniment song to be played;
a splicing module, configured to splice the recorded audio of the m segments into an antiphonal-singing audio file of the n singers; and
a playing module, configured to play the antiphonal-singing audio file to the n singers.
7. The device according to claim 6, wherein the confirmation module comprises:
a first confirmation submodule, configured to confirm whether the audio input and output functions of the terminal devices used by the n singers work properly; and
a second confirmation submodule, configured to confirm that a presence confirmation has been received from each of the n singers.
8. The device according to claim 6, wherein the assignment module comprises:
a division submodule, configured to divide the accompaniment song to be played into m segments according to its punctuation structure; and
a mapping submodule, configured to establish a many-to-one mapping between the m segments and the n singers, wherein the value of m is greater than the value of n.
9. The device according to claim 6, wherein the production module comprises:
an acquisition submodule, configured to obtain, after ready information has been received from each of the n singers, the many-to-one mapping between the m segments and the n singers;
a recording submodule, configured to record, according to the mapping, the audio information of the singer corresponding to each of the m segments; and
a mixing submodule, configured to mix, for each of the m segments, the singer's audio information with the audio information of the accompaniment song, and to confirm the mixed audio information as the produced audio information of that segment.
10. The device according to claim 9, wherein the splicing module comprises:
a splicing submodule, configured to splice the mixed audio information of the m segments in the temporal order of the m segments; and
a third confirmation submodule, configured to confirm that the spliced audio file is the antiphonal-singing audio file of the n singers.
CN201611123170.4A 2016-12-08 2016-12-08 Method and device for recording antiphonal singing of multiple persons Pending CN106601220A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611123170.4A CN106601220A (en) 2016-12-08 2016-12-08 Method and device for recording antiphonal singing of multiple persons

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611123170.4A CN106601220A (en) 2016-12-08 2016-12-08 Method and device for recording antiphonal singing of multiple persons

Publications (1)

Publication Number Publication Date
CN106601220A true CN106601220A (en) 2017-04-26

Family

ID=58598571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611123170.4A Pending CN106601220A (en) 2016-12-08 2016-12-08 Method and device for recording antiphonal singing of multiple persons

Country Status (1)

Country Link
CN (1) CN106601220A (en)


Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1681270A (en) * 2004-04-08 2005-10-12 归海信息技术集成(上海)有限公司 System and method for realizing antiphonal singing and KTV of karaoke by Internet
CN1735028A (en) * 2004-08-31 2006-02-15 张旺 Method and device for realizing real-time Kala OK singing based on network musical hall
JP2007121913A (en) * 2005-10-31 2007-05-17 Brother Ind Ltd Karaoke system
CN101282257A (en) * 2007-04-05 2008-10-08 丰行互动科技股份有限公司 Method for implementing real time multi-human carol image and sound system using network
CN102456340A (en) * 2010-10-19 2012-05-16 盛大计算机(上海)有限公司 Karaoke in-pair singing method based on internet and system thereof
CN103021401A (en) * 2012-12-17 2013-04-03 上海音乐学院 Internet-based multi-people asynchronous chorus mixed sound synthesizing method and synthesizing system
CN103137124A (en) * 2013-02-04 2013-06-05 武汉今视道电子信息科技有限公司 Voice synthesis method
CN103228065A (en) * 2013-04-09 2013-07-31 天脉聚源(北京)传媒科技有限公司 Mobile equipment based on Wi-Fi, and method and system of mobile equipment for networking
CN103295568A (en) * 2013-05-30 2013-09-11 北京小米科技有限责任公司 Asynchronous chorusing method and asynchronous chorusing device
CN103310822A (en) * 2013-06-08 2013-09-18 泉州天籁时空文化传播有限公司 Method for practicing choral songs anytime and anywhere
CN103337240A (en) * 2013-06-24 2013-10-02 华为技术有限公司 Method for processing voice data, terminals, server and system
CN103377649A (en) * 2012-04-20 2013-10-30 上海渐华科技发展有限公司 Method for achieving network karaoke antiphonal singing in real time
JP2014199373A (en) * 2013-03-30 2014-10-23 株式会社第一興商 Performance start synchronization system for network chorus
CN104869427A (en) * 2014-02-24 2015-08-26 唐大为 Method, device and system enabling multiple users to sing same song simultaneously online
CN105006234A (en) * 2015-05-27 2015-10-28 腾讯科技(深圳)有限公司 Karaoke processing method and apparatus
CN105023559A (en) * 2015-05-27 2015-11-04 腾讯科技(深圳)有限公司 Karaoke processing method and system
CN105208039A (en) * 2015-10-10 2015-12-30 广州华多网络科技有限公司 Chorusing method and system for online vocal concert
JP2016191731A (en) * 2015-03-30 2016-11-10 株式会社コスミックメディア Multi-point singing method, and multi-point singing system


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107665703A (en) * 2017-09-11 2018-02-06 上海与德科技有限公司 The audio synthetic method and system and remote server of a kind of multi-user
CN112567758A (en) * 2018-06-15 2021-03-26 思妙公司 Audio-visual live streaming system and method with latency management and social media type user interface mechanism
CN110149528A (en) * 2019-05-21 2019-08-20 北京字节跳动网络技术有限公司 A kind of process method for recording, device, system, electronic equipment and storage medium
CN110149528B (en) * 2019-05-21 2021-11-16 北京字节跳动网络技术有限公司 Process recording method, device, system, electronic equipment and storage medium
CN112312163A (en) * 2020-10-30 2021-02-02 北京字跳网络技术有限公司 Video generation method and device, electronic equipment and storage medium
CN112489610A (en) * 2020-11-10 2021-03-12 北京小唱科技有限公司 Intelligent chorus method and device
CN112489610B (en) * 2020-11-10 2024-02-23 北京小唱科技有限公司 Intelligent chorus method and device

Similar Documents

Publication Publication Date Title
CN106601220A (en) Method and device for recording antiphonal singing of multiple persons
CN105810211B Audio data processing method and terminal
CN106653037B (en) Audio data processing method and device
CN108597494A (en) Tone testing method and device
CN108269578B (en) Method and apparatus for handling information
CN103597543A (en) Semantic audio track mixer
CN102037486A (en) System for learning and mixing music
CN111354332A (en) Singing voice synthesis method and device
US7424333B2 (en) Audio fidelity meter
CN104123115A (en) Audio information processing method and electronic device
Fraj et al. Development and perceptual assessment of a synthesizer of disordered voices
CN107170456A (en) Method of speech processing and device
US20070044643A1 (en) Method and Apparatus for Automating the Mixing of Multi-Track Digital Audio
CN106611603A (en) Audio processing method and audio processing device
CN107770628A Karaoke implementation method and system, and smart home terminal
Müller et al. Interactive fundamental frequency estimation with applications to ethnomusicological research
TW202036534A (en) Speech synthesis method, device, and equipment
CN108140402A (en) The dynamic modification of audio content
CN105702249A (en) A method and apparatus for automatic selection of accompaniment
CN109584859A (en) Phoneme synthesizing method and device
US20060120225A1 (en) Apparatus and method for synchronizing audio with video
EP2660815A1 (en) Methods and apparatus for audio processing
CN109243472A Audio processing method and audio processing system
McKinnon-Bassett et al. Experimental comparison of two versions of a technical ear training program: Transfer of training on tone color identification to a dissimilarity-rating task
CN113963674A (en) Work generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170426