WO1998055991A1 - Method and apparatus for reproducing a recorded voice with alternative performance attributes and temporal properties - Google Patents

Method and apparatus for reproducing a recorded voice with alternative performance attributes and temporal properties

Info

Publication number
WO1998055991A1
WO1998055991A1 (PCT/GB1998/001463)
Authority
WO
WIPO (PCT)
Prior art keywords
vocal
performance
work
attributes
sample
Prior art date
Application number
PCT/GB1998/001463
Other languages
French (fr)
Inventor
Ken Lomax
Original Assignee
Isis Innovation Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Isis Innovation Limited filed Critical Isis Innovation Limited
Priority to JP50181399A priority Critical patent/JP2002502510A/en
Priority to EP98922926A priority patent/EP0986807A1/en
Publication of WO1998055991A1 publication Critical patent/WO1998055991A1/en

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00Speech synthesis; Text to speech systems
    • G10L13/02Methods for producing synthetic speech; Speech synthesisers
    • G10L13/033Voice editing, e.g. manipulating the voice of the synthesiser
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00Details of electrophonic musical instruments
    • G10H1/36Accompaniment arrangements
    • G10H1/361Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • G10H1/366Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems with means for modifying or correcting the external signal, e.g. pitch correction, reverberation, changing a singer's voice
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/003Changing voice quality, e.g. pitch or formants
    • G10L21/007Changing voice quality, e.g. pitch or formants characterised by the process used
    • G10L21/013Adapting to target pitch
    • G10L2021/0135Voice conversion or morphing

Abstract

The voice morpher creates a hybrid performance of a work, such as a song, using two separate encodings of the work. One of the encodings is reproduced with alternative performance attributes and temporal properties taken from the other of the two encodings. The voice morpher is particularly suitable for use with a karaoke machine as it enables the singer to actually sound like the original performer.

Description

METHOD AND APPARATUS FOR REPRODUCING A RECORDED VOICE WITH ALTERNATIVE PERFORMANCE ATTRIBUTES AND
TEMPORAL PROPERTIES
The present invention relates to a method and apparatus for reproducing a recorded voice with alternative performance attributes and temporal properties. The present invention is particularly suited for use as an alternative to a conventional karaoke machine. For the sake of simplicity, the present invention is referred to herein as a voice morpher.
Karaoke machines have become increasingly popular, both in Japan where they originated and elsewhere. A karaoke machine enables a person, hereafter referred to as the user, to sing along to a backing track of any familiar song, hereafter referred to as the current song. The user sings into a microphone and his/her voice is mixed with the accompaniment before being played out through an amplifier and speakers. The output from the speakers is thus a combination of the pre-recorded accompaniment track and the user's voice. The user therefore has the impression of being a singer in a band. Unfortunately, whilst most people sing reasonably well, many wish to sound better or more like the original artist who first sang the current song. A conventional karaoke machine cannot satisfy this wish, as the user's own voice is reproduced exactly.
Additionally it is becoming increasingly common in certain circumstances for professional singers to mime to an existing recording when performing. This can be disconcerting to an audience when the singer fails to mime accurately. Indeed, extremely small discrepancies between the movements of a singer's mouth and the sound heard are noticeable and distracting to an audience.
With the present invention, the performance of an original artist singing the current song is analysed and encoded. When the user then sings the same song, certain properties of the user's voice, hereafter referred to as vocal attributes, control the manner in which the artist's performance is reproduced in real time. By reproducing the artist's voice under guidance of the vocal attributes of the user's voice, the user is given the impression of singing the current song in the voice of the original artist.
The present invention provides apparatus for reproducing a recorded voice with alternative performance attributes and temporal properties comprising template means having an encoding of an artist's performance of a work; vocal attribute means for determining the vocal attributes of a separate performance of the work; a locator for performing temporal mapping between the encoding of the artist's performance and the vocal attributes; and a synthesis device for combining data from the encoding of the artist's performance with one or more of the vocal attributes to produce a hybrid performance of the work. Ideally, the apparatus is arranged to reproduce the recorded voice with alternative performance attributes and temporal properties substantially in real time. That is to say, the vocal attribute means, the locator and the synthesis device are adapted to generate their respective outputs in a time period which is sufficiently short to be unnoticeable.
Separately, the present invention provides a vocal encoder for generating an encoding of an artist's performance of a work suitable for use with the above-mentioned template means, the vocal encoder comprising sampling means for dividing the artist's vocal performance of the work into a plurality of samples, each of the samples partially overlapping at least one other sample; and an analyser for separately extracting data representative of the vocal performance from each of the plurality of samples and for generating a vox encoding consisting of the extracted data identified with respect to the location of the respective sample in the work. The type of data extracted by the analyser is chosen to enable the substitution or the combining of alternative vocal characteristics so that the data can be used to generate a hybrid performance of the work.
Preferably, the artist's vocal performance is encoded to separately include data on voiced and unvoiced components, the fundamental frequency and associated harmonics of the voiced components, the spectral tilt and the amplitude. Moreover, in a preferred embodiment the template means includes the vocal encoder for encoding the artist's performance of the work. The vocal attribute means may provide data on the presence of voiced or unvoiced components, the fundamental frequency of the voiced components, the spectral tilt and the amplitude. Both the encoding of the artist's performance and the vocal attributes may additionally include separate cue data enabling temporal mapping. In a preferred embodiment the apparatus further includes accompaniment means for storing and reproducing an accompaniment to the work. The synthesis device preferably combines at least one of the following vocal attributes: fundamental frequency, spectral tilt and amplitude with data from the vox encoding of the artist's performance. The present invention provides in a further aspect a method of reproducing a recorded voice with alternative performance attributes and temporal properties comprising determining vocal attributes of a performance of a work; performing temporal mapping between an encoded artist's performance of the work and the vocal attributes; and combining data from the encoding of the artist's performance with one or more of the vocal attributes to produce a hybrid performance of the work.
A feature of this invention is the potential for temporal modification of the encoded vocal performance. This may have at least two other applications in addition to the karaoke application described below. For example, in the case of miming sung performances, the application could be used to improve synchronisation between the artist's pre-recorded voice and lip movements. Secondly, the application may be used in the film industry to synchronise voice-overs more precisely with the lip movements of an actor. Although reference is made generally herein to singing, the apparatus and method may be employed for other types of vocal works such as speeches, readings and recitals.
An embodiment of the present invention will now be described by way of example with reference to the accompanying drawing, in which Figure 1 is a schematic diagram of a voice morpher in accordance with the present invention.
The voice morpher will be described with reference to its application as an improved karaoke machine. As shown in Figure 1, the voice morpher comprises four main components: accompaniment means 14 for storing and re-synthesising musical accompaniments, template means 17 for encoding and storing an artist's vocal rendition of one or more songs, a microphone tracker 10 and a voice synthesiser 20, each of which will be described in detail.
Accompaniment means for storing and re-synthesising musical accompaniments
The accompaniment means 14 for storing and re-synthesising musical accompaniments consists of a MIDI sequencer 15 and a MIDI synthesiser 16. The MIDI sequencer 15 holds data defining accompaniments to one or more songs. When a song has been selected, i.e. the current song, by actuation of appropriate control means, the MIDI sequencer 15 transmits its sequence of MIDI commands to the MIDI synthesiser 16 under guidance of a timer 22, thus causing the accompaniment for the current song to be played through one or more speakers 24.
The microphone tracker
When the accompaniment is played through the speakers 24, the user sings along to the accompaniment into a microphone 11. The signal from the microphone is sampled into an input buffer 12 of length typically 500 samples, at a sampling rate typically of 16 kHz and a resolution of 16 bits. The precise sampling rate, sampling resolution and length of the input buffer may vary between applications. When the input buffer 12 is full, its data, hereafter termed the input signal, is supplied to a first vocal analyser 13, preferably for immediate analysis. Meanwhile the input buffer 12 is cleared and begins once again to fill with data sampled from the microphone 11.
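As a minimal sketch of this buffering scheme (not part of the patent text): blocks of 500 samples at 16 kHz are handed off for analysis as soon as they fill, while the stream continues filling the next block. The use of the third-party `sounddevice` library is an assumption; the patent names no implementation.

```python
import queue
import sounddevice as sd  # assumed audio I/O library; the patent names none

SAMPLE_RATE = 16_000   # Hz, as suggested in the text
BUFFER_LEN = 500       # samples per input buffer

filled_buffers = queue.Queue()

def on_block(indata, frames, time_info, status):
    # Each callback delivers one full input buffer (the "input signal");
    # a copy is queued for the vocal analyser while the stream refills.
    filled_buffers.put(indata[:, 0].copy())

stream = sd.InputStream(samplerate=SAMPLE_RATE, blocksize=BUFFER_LEN,
                        channels=1, dtype="int16", callback=on_block)
stream.start()
```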
The vocal analyser 13 performs analysis of the input signal to determine the vocal attributes of the input signal. The vocal attributes may include amplitude, voicing characteristics, fundamental frequency and spectral tilt. Reference to spectral tilt is intended as reference to the variation in the intensity of harmonics for a given fundamental frequency. This vocal attribute is characteristic of the strength with which a note is sung. Other vocal attributes may additionally be analysed where necessary.
Considering the vocal attributes referred to above in turn, the amplitude is determined by the maximum sample in the input signal. The voicing characteristic may be one of three alternatives: silent, voiced or unvoiced, and can be determined using one of several known voicing analysis procedures. Preferably, the voicing characteristic is determined using a zero-crossing and amplitude analysis. This analysis involves the following steps: the maximum sample in the input signal is located; if the magnitude of the maximum sample does not surpass a preset silence threshold, the voicing characteristic is deemed silent; if the silence threshold is surpassed, the number of zero-crossings in the input signal is determined, and if the number of zero-crossings surpasses a zero-crossing threshold the input signal is deemed unvoiced, otherwise voiced. When the voicing characteristic of the input signal is deemed to be voiced, the fundamental frequency is derived. The fundamental frequency may be determined using one of several known fundamental frequency estimating procedures, the most common of which is cepstral analysis. Cepstral analysis yields an approximation of the fundamental frequency which may be used to identify the precise location of the fundamental peak in the Fourier transform of the input signal, by means of, for example, a parabolic interpolation. This enables an accurate estimation of the fundamental frequency. The spectral tilt is defined simply as the slope of the best straight-line fit to the Fourier transform of the input signal.
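A sketch of this attribute analysis follows: amplitude as the maximum sample, voicing by silence and zero-crossing thresholds, fundamental frequency by cepstral analysis refined with parabolic interpolation around the matching spectral peak, and spectral tilt as the least-squares slope of the log spectrum. The threshold values and search range are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

SAMPLE_RATE = 16_000
SILENCE_THRESHOLD = 500        # 16-bit amplitude below which a buffer is silent (assumption)
ZERO_CROSSING_THRESHOLD = 120  # crossings per buffer above which sound is unvoiced (assumption)

def analyse(signal: np.ndarray) -> dict:
    """Extract the vocal attributes of one input buffer, as described above."""
    amplitude = np.max(np.abs(signal))

    # Voicing classification: silent / unvoiced / voiced.
    if amplitude < SILENCE_THRESHOLD:
        return {"voicing": "silent", "amplitude": amplitude}
    crossings = np.count_nonzero(np.diff(np.sign(signal)))
    if crossings > ZERO_CROSSING_THRESHOLD:
        return {"voicing": "unvoiced", "amplitude": amplitude}

    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    log_spec = np.log(spectrum + 1e-12)

    # Cepstral analysis gives a coarse fundamental-frequency estimate ...
    cepstrum = np.fft.irfft(log_spec)
    min_q = int(SAMPLE_RATE / 1000)          # search 80 Hz - 1000 Hz (assumed vocal range)
    max_q = int(SAMPLE_RATE / 80)
    peak_q = min_q + np.argmax(cepstrum[min_q:max_q])
    f0_coarse = SAMPLE_RATE / peak_q

    # ... refined by parabolic interpolation around the fundamental peak
    # in the Fourier transform.
    k = int(round(f0_coarse * len(signal) / SAMPLE_RATE))
    a, b, c = log_spec[k - 1], log_spec[k], log_spec[k + 1]
    shift = 0.5 * (a - c) / (a - 2 * b + c)
    f0 = (k + shift) * SAMPLE_RATE / len(signal)

    # Spectral tilt: slope of the best straight-line fit to the spectrum.
    tilt = np.polyfit(freqs, log_spec, 1)[0]

    return {"voicing": "voiced", "amplitude": amplitude, "f0": f0, "tilt": tilt}
```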
Additional parameters which may be described by the vocal analyser include a description of the user's lip movements by means of a visual analyser (not shown). Routines other than those specified above may be used to determine the vocal attributes, if appropriate. The analysis procedures described above which are used to determine the vocal attributes can operate at high speed and typically take 0.01 seconds to produce an output. Thus, the vocal attributes output from the microphone tracker 10 which describe the user's voice can be generated substantially in real time (that is, sufficiently fast that any delay is unnoticeable).
In alternative applications of the voice morpher this speed of analysis may not be necessary. Where the analysis need not be performed in real time, the analyser 13 may perform a more detailed analysis than that described above. Also, data on additional vocal attributes may be generated.
Template means for encoding and storing an artist's vocal rendition of the current song
The template means 17 comprises a second vocal analyser 18 and means 19 for storing the resulting encoding. A complete vocal track of an artist singing the current song is sampled by the second vocal analyser 18 at a sampling rate typically of 44 kHz and a sampling resolution typically of 16 bits, producing a waveform. The precise sampling rate and resolution may vary between applications. When the vocal track is sampled, a sequence of analysis windows of duration typically 0.05 seconds and centred typically 0.01 seconds apart is applied to the waveform, producing a sequence of short-time analysis waveforms. Thus, each of the analysis windows overlaps with neighbouring analysis windows.
Each short-time analysis waveform is then analysed using established procedures to determine its properties. The analysis can provide, amongst others, descriptions of the voiced component, the unvoiced component, the fundamental frequency, voicing characteristics, amplitude and lip position. The resulting data for each short-time analysis waveform is then stored as a respective analysis frame. The result is a sequence of analysis frames describing the changing voiced component, unvoiced component, voicing characteristic, amplitude and lip positions over the duration of the entire song. The sequence of analysis frames is referred to hereafter as a vox track and is stored in the memory 19. The analysis techniques employed in generating the vox track preferably include the same techniques as those employed by the microphone tracker, albeit at a much higher sampling rate. In this way the substitution or the combining of data from the vox track and the microphone tracker is simplified. However, the vox track includes additional data on the voiced and unvoiced components which is sufficient to enable the artist's performance to be reproduced. For example, for an unvoiced component the envelope function of the sample waveform may be determined by interpolating between local maxima identified along the frequency axis of the Fourier transform.
Although the template means 17 is described with both the analyser 18 and the memory 19, where the voice morpher is to be used as a type of karaoke machine the template means 17 may consist only of the memory 19, in which is stored a plurality of encodings of different songs. The memory 19 may employ conventional means for data storage such as laser discs.
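The windowed analysis above can be sketched as follows: overlapping 0.05 s windows centred 0.01 s apart are cut from the artist's track, and each is analysed into a frame tagged with its time location. `analyse_frame` is a hypothetical callable standing in for the fuller analysis (voiced/unvoiced components, fundamental frequency, amplitude, lip position); the window function is an assumption.

```python
import numpy as np

TEMPLATE_RATE = 44_000   # Hz, sampling rate suggested for the artist's track
WINDOW_SECONDS = 0.05    # duration of each analysis window
HOP_SECONDS = 0.01       # spacing of window centres, giving heavy overlap

def build_vox_track(vocal_track: np.ndarray, analyse_frame) -> list[dict]:
    """Slice the artist's vocal track into overlapping short-time analysis
    waveforms and store one analysis frame per window, tagged with the
    window's temporal location in the song."""
    window = int(WINDOW_SECONDS * TEMPLATE_RATE)   # 2200 samples
    hop = int(HOP_SECONDS * TEMPLATE_RATE)         # 440 samples
    frames = []
    for start in range(0, len(vocal_track) - window + 1, hop):
        short_time = vocal_track[start:start + window] * np.hanning(window)
        frame = analyse_frame(short_time)
        frame["time"] = (start + window / 2) / TEMPLATE_RATE  # window centre, s
        frames.append(frame)
    return frames
```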
In a further alternative, a single analyser may be employed to function as both the analyser for the microphone tracker 10 and the template means 17. The template data may be compressed to maximise the amount of data stored. For example, variable resolution of the data analysis may be employed so that a continuous sound, such as silence or a voiced component lasting up to 0.5 seconds, could be recorded as a single data entry. Where compression of this nature is employed, accurate data on the location of the template data within the song is essential.
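A minimal sketch of such variable-resolution compression, assuming a run is merged only while the voicing state is unchanged: consecutive matching frames collapse into one entry of up to 0.5 s, and every entry keeps an explicit time stamp so its location within the song is preserved.

```python
MAX_RUN_SECONDS = 0.5   # longest stretch collapsed into a single entry

def compress_vox_track(frames: list[dict], hop_seconds: float = 0.01) -> list[dict]:
    """Run-length compress the vox track: a sustained sound (e.g. silence or
    a held voiced note) becomes a single entry. Explicit time stamps are kept
    because compressed data can no longer be located by frame index alone."""
    compressed = []
    for frame in frames:
        last = compressed[-1] if compressed else None
        if (last is not None
                and last["voicing"] == frame["voicing"]
                and frame["time"] - last["time"] < MAX_RUN_SECONDS):
            # Extend the current run instead of storing a new entry.
            last["duration"] = frame["time"] - last["time"] + hop_seconds
        else:
            compressed.append({**frame, "duration": hop_seconds})
    return compressed
```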
The voice synthesiser 20
The voice synthesiser 20 oversees the interaction between the microphone tracker 10, the accompaniment means 14, the stored encoding of the vox track 19 and a locator 21. Upon request by the user, the voice synthesiser 20 resets and starts the timer 22. This causes the MIDI sequencer 15 to pass its encoded MIDI signals to the MIDI synthesiser 16 at a rate determined by the timer 22. The output of the MIDI synthesiser is sent to the speakers 24, causing the accompaniment to the current song to be played. Upon hearing the accompaniment, at the appropriate time, the user sings the current song into the microphone 11. The output from the microphone tracker 10, in the form of vocal attributes describing the user's voice generated at regular intervals of typically 0.02 seconds, i.e. substantially in real time, is input into the locator 21 of the voice synthesiser 20.
The locator 21 is connected to two output buffers 25, 26 via a synthesis device 23. When either output buffer 25 or 26 is empty (initially both of them), the empty buffer sends a request to the locator 21 for data. In its simplest form the locator 21 queries the timer 22 to determine the current temporal location of the accompaniment and retrieves from the memory 19 the analysis frame within the vox track which lies closest to the current time T, hereafter referred to as the current analysis frame. The locator 21 therefore applies a linear mapping between the temporal location of the accompaniment and the position of the current analysis frame in the vox track.
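In this simplest form the locator reduces to an index lookup; a sketch, assuming the 0.01 s frame spacing of the template analysis above:

```python
def locate_linear(frames: list[dict], current_time: float,
                  hop_seconds: float = 0.01) -> dict:
    """Linear mapping: pick the analysis frame whose centre lies closest to
    the accompaniment's current temporal location T."""
    index = round(current_time / hop_seconds)
    index = max(0, min(index, len(frames) - 1))   # clamp to the vox track
    return frames[index]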
It is possible, however, that the user may sing words in the song at a slightly different rate to the artist. For example, a vowel may be prolonged or a consonant reached early. To cater for this effect and so improve synchronisation between the user's voice and the artist's vox track, the locator 21 may apply a non-linear mapping between the accompaniment and the vox track. This operates by comparing the vocal attributes input from the microphone tracker 10 and a plurality of neighbouring analysis frames in the vox track about time T. If, for example, the user changes from a voiced to an unvoiced sound earlier than as stored in the vox track, the rate of advance through the vox track may be accelerated. Similarly, if the user changes from a voiced to an unvoiced sound later than as forecast by the vox track, the rate of advance through the vox track may be decelerated. Thus, the rate of advance along the vox track can be controlled by the user's voice. The locator 21 may also compare information describing the lip position of the user with the lip positions described in the analysis frames of the vox track to improve synchronisation.
The current analysis frame selected by the locator 21 is input to the synthesis device 23, which also receives the vocal attributes from the microphone tracker 10. The synthesis device 23 then generates a waveform using a combination of this data. The voiced and unvoiced components of the waveform are shaped by the voiced and unvoiced components received from the selected analysis frames of the vox track, and the vocal attributes of the waveform are determined by the vocal attributes received from the microphone tracker 10. This data is interpolated appropriately to ensure a smooth modification of audio properties in the synthesised waveform using known techniques. The waveform generated by the synthesis device 23 is thus a hybrid of the encoded artist's performance and the user's vocal attributes.
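The non-linear mapping can be sketched as a locator that keeps its own position in the vox track and nudges its rate of advance when the user's voicing state leads or lags the stored frames. The search width, rate gains and clamping limits below are illustrative assumptions, not values from the patent.

```python
class NonLinearLocator:
    """Advance through the vox track at a rate steered by the user's voice:
    a voicing state reached earlier than stored accelerates the advance,
    one reached later decelerates it."""

    def __init__(self, frames, search=10):
        self.frames = frames
        self.search = search      # neighbouring frames examined about time T
        self.position = 0.0       # current location, in frames
        self.rate = 1.0           # frames advanced per update interval

    def step(self, user_voicing: str) -> dict:
        i = int(self.position)
        lo = max(0, i - self.search)
        neighbours = self.frames[lo:i + self.search]
        # Absolute indices of nearby frames matching the user's voicing state.
        matches = [j for j, f in enumerate(neighbours, start=lo)
                   if f["voicing"] == user_voicing]
        if matches:
            nearest = min(matches, key=lambda j: abs(j - i))
            # Match ahead of the current frame -> speed up; behind -> slow down.
            self.rate = min(2.0, max(0.5, 1.0 + 0.05 * (nearest - i)))
        self.position = min(self.position + self.rate, len(self.frames) - 1)
        return self.frames[int(self.position)]
```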
The lip position specified in the current analysis frame input from the locator 21 can be used to provide a graphical illustration of the lip movements in a video display 27.
The audio synthesis routines described are established techniques based upon spectral modelling with additive synthesis, although alternative audio synthesis procedures may be used. The waveform generated by the synthesis device 23 is sent to the currently empty output buffer 25 or 26 which is then queued for output to the speakers 24. The length of the output buffers is typically 500 samples though this may vary between applications. Once the output buffer is played it will again send a request to the locator 21 which will in turn cause more data to be synthesised and sent to it. Meanwhile the other output buffer will be playing the data it has received. In this way the waveforms from the synthesis device 23 are supplied to the buffers 25, 26 alternately and are fed from the buffers 25, 26 to the speakers 24 alternately. This process repeats until the entire accompaniment has been played.
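A sketch of this alternating double-buffer scheme follows. `request_data`, `queue_for_playback` and `accompaniment_done` are hypothetical callables standing in for the locator/synthesis chain, the speaker output and the timer respectively.

```python
import numpy as np

OUT_LEN = 500   # samples per output buffer, as suggested in the text

def output_loop(request_data, queue_for_playback, accompaniment_done):
    """Alternate two output buffers: while one is queued for playback, the
    other is refilled with newly synthesised samples, so synthesis and
    playback overlap in time and the output is continuous."""
    buffers = (np.zeros(OUT_LEN, dtype=np.int16),
               np.zeros(OUT_LEN, dtype=np.int16))
    current = 0
    while not accompaniment_done():
        buffers[current][:] = request_data(OUT_LEN)   # refill the empty buffer
        queue_for_playback(buffers[current])          # plays while the other refills
        current ^= 1                                  # swap the buffers' roles
```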
In this way the sound emerging from the speakers consists of the accompaniment and a hybrid performance of the current song which retains the vocal timbre of the artist but incorporates the vocal attributes of the user and the user's temporal progression through the song.
Instead of or in addition to the hybrid performance being supplied to speakers, the waveforms generated by the synthesis device may be stored in a memory or recorded.
In addition to the use of the voice morpher to enable a singer to sound like someone else, the voice morpher may be used with a prerecording of the singer's own voice, so that in circumstances where a performer wishes to mime to their own music this can be done without the attendant difficulties of lip synch.
The voice morpher is also of use in the film industry. In many films speech is separately dubbed after filming. This requires the actors to follow filmed lip movements carefully whilst still ensuring all the necessary expression and vocal dynamics are produced. Using the voice morpher, a poor quality vocal recording is taken at the time of filming and a good quality vocal recording is separately produced later which closely follows the film but without the demand for exact synchronisation. The later produced good quality recording is then encoded as the desired voice and the poor quality speech recorded during filming is analysed by the microphone tracker. The resultant final recording is a combination of the high quality sound of the later recording with the expression and intensity of the filmed scene. As the voice morpher automatically ensures substantially exact synchronisation, the need for exact dubbing in post production is removed. Furthermore, more detailed analysis of both vocal tracks may be performed as the demand for real-time operation is removed.
The voice morpher may also be used with other forms of vocal performance works such as speeches, readings and recitals. With works such as these where there is no clear accompaniment, alternative means are employed to identify the exact location within the work. This may be in the form of time cues, supplied to the user preferably in a way which does not interfere with the user's performance of the work.

Claims

1. Apparatus for reproducing a recorded voice with alternative performance attributes and temporal properties comprising template means having an encoding of an artist's performance of a work; vocal attribute means for determining the vocal attributes of a separate performance of the work; a locator for performing temporal mapping between the encoding of the artist's performance and the vocal attributes; and a synthesis device for combining data from the encoding of the artist's performance with one or more of the vocal attributes to produce a hybrid performance of the work.
2. Apparatus as claimed in claim 1, further including accompaniment means for storing and reproducing an accompaniment to the work.
3. Apparatus as claimed in claim 2, wherein the locator is in communication with the accompaniment means whereby the encoding of the artist's performance is mapped to the vocal attributes with reference to the location of the accompaniment in the work.
4. Apparatus as claimed in any one of claims 1 to 3, wherein the vocal attribute means includes a microphone, an input buffer connected to the output of the microphone and a vocal analyser connected to the input buffer for identification of vocal attributes of the sound signals sequentially received into the input buffer.
5. Apparatus as claimed in claim 4, wherein the vocal analyser includes voicing classification means for classifying the voicing characteristic of each output from the buffer as being either voiced, unvoiced or silent.
6. A method of reproducing a recorded voice with alternative performance attributes and temporal properties comprising: determining vocal attributes of a performance of a work; performing temporal mapping between an encoded artist's performance of the work and the vocal attributes; and combining data from the encoding of the artist's performance with one or more of the vocal attributes to produce a hybrid performance of the work.
7. A method as claimed in claim 6, wherein the steps of determining vocal attributes of a performance, performing temporal mapping between an encoded performance and the vocal attributes and combining data from the encoded performance with one or more of the vocal attributes are performed substantially in real time.
8. A method as claimed in claim 7, wherein an accompaniment is provided to cue the performance of the work.
9. A method as claimed in claim 8, wherein the encoded performance includes cue data and the encoded performance and the vocal attributes are mapped by matching the accompaniment and the cue data.
10. A vocal encoding method for generating an encoding of an artist's performance of a work, the method comprising sampling the artist's vocal performance of the work to create a plurality of samples, with each sample partially overlapping at least one other sample; analysing each sample to extract data representative of the vocal performance from each of the plurality of samples; and generating a vox encoding consisting of the extracted data identified with respect to the location of the respective sample in the work.
11. A vocal encoding method as claimed in claim 10, wherein each sample is analysed to determine the maximum amplitude of the sample.
12. A vocal encoding method as claimed in either of claims 10 or 11, wherein each sample is analysed to classify the voicing characteristic of the sample as being either voiced, unvoiced or silent.
13. A vocal encoding method as claimed in claim 12, wherein classification of the voicing characteristic is performed using at least one of zero-crossing and amplitude analysis.
14. A vocal encoding method as claimed in either of claims 12 or 13, wherein each sample classified as having a voiced characteristic is analysed to determine the fundamental frequency of the sample.
15. A vocal encoding method as claimed in claim 14, wherein the fundamental frequency of each sample is determined using cepstral analysis.
16. A vocal encoding method as claimed in claim 15, wherein each sample classified as having a voiced characteristic is additionally analysed to determine the spectral tilt of each of the samples by identification of the best straight-line fit to the Fourier transform of each sample.
17. A vocal encoder for generating an encoding of an artist's performance of a work, the vocal encoder comprising sampling means for dividing the artist's vocal performance of the work into a plurality of samples, each of the samples partially overlapping at least one other sample; and an analyser for separately extracting data representative of the vocal performance from each of the plurality of samples and for generating a vox encoding consisting of the extracted data identified with respect to the location of the respective sample in the work.
PCT/GB1998/001463 1997-06-02 1998-05-21 Method and apparatus for reproducing a recorded voice with alternative performance attributes and temporal properties WO1998055991A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP50181399A JP2002502510A (en) 1997-06-02 1998-05-21 Method and apparatus for playing recorded audio with alternative performance attributes and temporal characteristics
EP98922926A EP0986807A1 (en) 1997-06-02 1998-05-21 Method and apparatus for reproducing a recorded voice with alternative performance attributes and temporal properties

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB9711339A GB9711339D0 (en) 1997-06-02 1997-06-02 Method and apparatus for reproducing a recorded voice with alternative performance attributes and temporal properties
GB9711339.3 1997-06-02

Publications (1)

Publication Number Publication Date
WO1998055991A1

Family

ID=10813426

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1998/001463 WO1998055991A1 (en) 1997-06-02 1998-05-21 Method and apparatus for reproducing a recorded voice with alternative performance attributes and temporal properties

Country Status (4)

Country Link
EP (1) EP0986807A1 (en)
JP (1) JP2002502510A (en)
GB (1) GB9711339D0 (en)
WO (1) WO1998055991A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006079813A1 (en) * 2005-01-27 2006-08-03 Synchro Arts Limited Methods and apparatus for use in sound modification
US7825321B2 (en) 2005-01-27 2010-11-02 Synchro Arts Limited Methods and apparatus for use in sound modification comparing time alignment data from sampled audio signals

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6863378B2 (en) 1998-10-16 2005-03-08 Silverbrook Research Pty Ltd Inkjet printer having enclosed actuators
CN110189741A (en) * 2018-07-05 2019-08-30 腾讯数码(天津)有限公司 Audio synthetic method, device, storage medium and computer equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1993018505A1 (en) * 1992-03-02 1993-09-16 The Walt Disney Company Voice transformation system
US5307442A (en) * 1990-10-22 1994-04-26 Atr Interpreting Telephony Research Laboratories Method and apparatus for speaker individuality conversion
US5621182A (en) * 1995-03-23 1997-04-15 Yamaha Corporation Karaoke apparatus converting singing voice into model voice

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5307442A (en) * 1990-10-22 1994-04-26 Atr Interpreting Telephony Research Laboratories Method and apparatus for speaker individuality conversion
WO1993018505A1 (en) * 1992-03-02 1993-09-16 The Walt Disney Company Voice transformation system
US5621182A (en) * 1995-03-23 1997-04-15 Yamaha Corporation Karaoke apparatus converting singing voice into model voice

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006079813A1 (en) * 2005-01-27 2006-08-03 Synchro Arts Limited Methods and apparatus for use in sound modification
US7825321B2 (en) 2005-01-27 2010-11-02 Synchro Arts Limited Methods and apparatus for use in sound modification comparing time alignment data from sampled audio signals

Also Published As

Publication number Publication date
EP0986807A1 (en) 2000-03-22
JP2002502510A (en) 2002-01-22
GB9711339D0 (en) 1997-07-30

Similar Documents

Publication Publication Date Title
US7825321B2 (en) Methods and apparatus for use in sound modification comparing time alignment data from sampled audio signals
EP1849154B1 (en) Methods and apparatus for use in sound modification
US10008193B1 (en) Method and system for speech-to-singing voice conversion
JP5759022B2 (en) Semantic audio track mixer
US9847078B2 (en) Music performance system and method thereof
US5889223A (en) Karaoke apparatus converting gender of singing voice to match octave of song
JP3333022B2 (en) Singing voice synthesizer
US7613612B2 (en) Voice synthesizer of multi sounds
Macon et al. A singing voice synthesis system based on sinusoidal modeling
JPH09198091A (en) Formant converting device and karaoke device
US9892758B2 (en) Audio information processing
ES2356476T3 (en) PROCEDURE AND APPLIANCE FOR USE IN SOUND MODIFICATION.
JP7355165B2 (en) Music playback system, control method and program for music playback system
US6629067B1 (en) Range control system
JPH11184497A (en) Voice analyzing method, voice synthesizing method, and medium
EP0986807A1 (en) Method and apparatus for reproducing a recorded voice with alternative performance attributes and temporal properties
Villavicencio et al. Efficient pitch estimation on natural opera-singing by a spectral correlation based strategy
JPH11259066A (en) Musical acoustic signal separation method, device therefor and program recording medium therefor
JP4757971B2 (en) Harmony sound adding device
JP4430174B2 (en) Voice conversion device and voice conversion method
JP2014164131A (en) Acoustic synthesizer
JP2009244790A (en) Karaoke system with singing teaching function
Wada et al. AN ADAPTIVE KARAOKE SYSTEM THAT PLAYS ACCOMPANIMENT PARTS OF MUSIC AUDIO SIGNALS SYNCHRONOUSLY WITH USERS' SINGING VOICES
JPH0351899A (en) Device for 'karaoke' (orchestration without lyrics)
Hatch High-level audio morphing strategies

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref country code: JP

Ref document number: 1999 501813

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 1998922926

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1998922926

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 09445070

Country of ref document: US

WWW Wipo information: withdrawn in national office

Ref document number: 1998922926

Country of ref document: EP