EP1252620A1 - System and method of templating specific human voices - Google Patents
System and method of templating specific human voices
- Publication number
- EP1252620A1 EP1252620A1 EP00983768A EP00983768A EP1252620A1 EP 1252620 A1 EP1252620 A1 EP 1252620A1 EP 00983768 A EP00983768 A EP 00983768A EP 00983768 A EP00983768 A EP 00983768A EP 1252620 A1 EP1252620 A1 EP 1252620A1
- Authority
- EP
- European Patent Office
- Prior art keywords
- voice
- data
- template
- captured
- specific
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000000034 method Methods 0.000 title claims abstract description 71
- 238000005516 engineering process Methods 0.000 claims description 52
- 238000004458 analytical method Methods 0.000 claims description 36
- 238000012545 processing Methods 0.000 claims description 9
- 230000010076 replication Effects 0.000 claims description 7
- 238000004891 communication Methods 0.000 claims description 5
- 230000000694 effects Effects 0.000 claims description 5
- 238000012546 transfer Methods 0.000 claims description 4
- 238000012512 characterization method Methods 0.000 claims description 3
- 230000035479 physiological effects, processes and functions Effects 0.000 claims description 3
- 230000004913 activation Effects 0.000 claims description 2
- 230000005540 biological transmission Effects 0.000 claims description 2
- 210000000867 larynx Anatomy 0.000 claims description 2
- 230000003362 replicative effect Effects 0.000 claims description 2
- 238000012795 verification Methods 0.000 claims description 2
- 238000004590 computer program Methods 0.000 claims 12
- 238000004519 manufacturing process Methods 0.000 claims 5
- 230000000977 initiatory effect Effects 0.000 claims 2
- 241000269627 Amphiuma means Species 0.000 claims 1
- 230000003213 activating effect Effects 0.000 claims 1
- 238000013475 authorization Methods 0.000 claims 1
- 238000013500 data storage Methods 0.000 claims 1
- 238000012800 visualization Methods 0.000 claims 1
- 230000008569 process Effects 0.000 abstract description 9
- 238000010586 diagram Methods 0.000 description 9
- 230000002452 interceptive effect Effects 0.000 description 6
- 230000008901 benefit Effects 0.000 description 5
- 241000124008 Mammalia Species 0.000 description 4
- 230000003044 adaptive effect Effects 0.000 description 4
- 230000006870 function Effects 0.000 description 4
- 230000007246 mechanism Effects 0.000 description 4
- 230000037361 pathway Effects 0.000 description 4
- 238000010200 validation analysis Methods 0.000 description 4
- 230000015572 biosynthetic process Effects 0.000 description 3
- 230000002068 genetic effect Effects 0.000 description 3
- 238000003786 synthesis reaction Methods 0.000 description 3
- 230000000007 visual effect Effects 0.000 description 3
- 241000282412 Homo Species 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 230000018109 developmental process Effects 0.000 description 2
- 230000036541 health Effects 0.000 description 2
- 238000013178 mathematical model Methods 0.000 description 2
- 230000001537 neural effect Effects 0.000 description 2
- 230000008520 organization Effects 0.000 description 2
- 230000004044 response Effects 0.000 description 2
- 241000894007 species Species 0.000 description 2
- 230000002194 synthesizing effect Effects 0.000 description 2
- 230000002123 temporal effect Effects 0.000 description 2
- 230000001755 vocal effect Effects 0.000 description 2
- 206010013952 Dysphonia Diseases 0.000 description 1
- 206010016275 Fear Diseases 0.000 description 1
- 230000009471 action Effects 0.000 description 1
- 230000006978 adaptation Effects 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000001413 cellular effect Effects 0.000 description 1
- 230000019771 cognition Effects 0.000 description 1
- 230000006835 compression Effects 0.000 description 1
- 238000007906 compression Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 238000013481 data capture Methods 0.000 description 1
- 230000008451 emotion Effects 0.000 description 1
- 230000002996 emotional effect Effects 0.000 description 1
- 238000003306 harvesting Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 210000004072 lung Anatomy 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 238000007620 mathematical function Methods 0.000 description 1
- 239000003607 modifier Substances 0.000 description 1
- 239000002243 precursor Substances 0.000 description 1
- 108090000623 proteins and genes Proteins 0.000 description 1
- 230000008929 regeneration Effects 0.000 description 1
- 238000011069 regeneration method Methods 0.000 description 1
- 230000029058 respiratory gaseous exchange Effects 0.000 description 1
- 230000004043 responsiveness Effects 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 238000012358 sourcing Methods 0.000 description 1
- 230000009885 systemic effect Effects 0.000 description 1
- 230000001225 therapeutic effect Effects 0.000 description 1
- 238000002560 therapeutic procedure Methods 0.000 description 1
- 230000001052 transient effect Effects 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
- 238000011282 treatment Methods 0.000 description 1
- 238000013024 troubleshooting Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
- G10L13/033—Voice editing, e.g. manipulating the voice of the synthesiser
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/54—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for retrieval
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/003—Changing voice quality, e.g. pitch or formants
- G10L21/007—Changing voice quality, e.g. pitch or formants characterised by the process used
- G10L21/013—Adapting to target pitch
- G10L2021/0135—Voice conversion or morphing
Definitions
- a tape or digital recording device is used to record someone's voice and thereby retain it for future listening and replay as it was recorded originally, or portions of the original recording may be played as desired.
- These devices and methods of voice recording also include a range of artificial voices, created by computers, which may be used for many different functions, including for example telephone automatic assistance and verification, very basic speech between toys or equipment and users, synthesized voices for the film and entertainment industry, and the like.
- these artificial voices are preprogrammed to a narrow set of responses according to a specific input.
- these artificial voice sounds are nevertheless simple compared to the robust voice capabilities of the present invention. Indeed, in certain embodiments of the invention there are elements that are either quite different from such systems or which take the previous technology far beyond that ever contemplated or even suggested by such prior discoveries or innovations.
- Figure 1 is a flow diagram of one embodiment of the system operation of the invention.
- Figure 2 is a schematic diagram of one embodiment of a voice capture subsystem.
- Figure 3 is a schematic diagram of one embodiment of a voice analysis subsystem.
- Figure 4 is a schematic diagram of one embodiment of a voice characterization subsystem.
- Figure 5 is a schematic diagram of one embodiment of a voice template subsystem.
- Figure 6 is a schematic diagram of one embodiment of a voice template signal bundler subsystem.
- Figure 7 is one embodiment of a schematic diagram of the system of the invention used with remote information download and upload options.
- Figure 8 is an exemplary plan view of an embodiment of the invention embodied in a mobile, compact component.
- Figure 9 is an exemplary plan view of an embodiment of the invention used with a visual media source.
- Systems and methods are provided for recording or otherwise capturing an enabling amount of a specific person's voice to form a voice pattern template. That template is then useful as a tool for building new speech sounding like that precise voice, using the template, with the new speech probably never having been actually said or never having been said in the precise context or sentences by the specific human but actually sounding identical in all aspects to that specific human's actual speech.
- the enabling portion is designed to capture the elements of the actual voice necessary to reconstruct the actual voice; however, a confidence rating is available to predict the limits of the re-constructed or re-created speech in the event there is not enough enabling speech to start with.
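- Purely as an illustration of how such a confidence rating might be computed, the sketch below scores a template from the amount of enabling speech captured and how much of the phoneme inventory it covers. The function name, thresholds, and weights are assumptions for this example, not taken from the patent.

```python
# Illustrative sketch only: one possible confidence rating for a voice template,
# based on how much enabling speech was captured and how much of the phoneme
# inventory it covers. Thresholds and weights are invented for the example.

def estimate_confidence(seconds_captured: float,
                        phonemes_covered: int,
                        phoneme_inventory: int = 40) -> float:
    """Return a 0.0-1.0 confidence that a template built from this much
    enabling speech can accurately re-create new utterances."""
    duration_score = min(seconds_captured / 60.0, 1.0)      # saturate at ~1 minute of speech
    coverage_score = phonemes_covered / phoneme_inventory   # fraction of distinct sounds seen
    return round(0.5 * duration_score + 0.5 * coverage_score, 2)

print(estimate_confidence(seconds_captured=20, phonemes_covered=25))  # e.g. 0.48
```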
- a new voice or voices may be used with a database of subject matter, historical data, and adaptive or artificial intelligence modules to enable new discussions with the user just as if the templated voice's originator were present.
- This system and method may be combined with other media, such as a software file, a chip embedded tool, or other forms. Interactive use of this system and method may occur in various manners.
- a unit module itself may comprise the entirety of an embodiment of this invention, e.g. a chip or electronic board which is configured to capture and enable use of a voice in the manner disclosed herein.
- the template is useful, for example, as a tool for capturing and creating new dialogs with people who are no longer immediately available, who may be deceased, or even those who consent to having their voices templated and used in this manner.
- Another example is the application to media, such as film or photos or other depictions of the actual voice(s) originator, to create on-demand virtual dialog with the originator.
- Various other uses and applications are contemplated within the scope of the invention.
- Voice is a sound of extraordinary power among mammals.
- the sound of a mother's voice is recognized by and soothes a child even before birth, and the sound of a grandfather's voice calms the fears of even a grown person.
- Other voices may inspire complete strangers or may elicit memories from loved ones of long past events and moments.
- this particularity of one's voice derives from the genetic contribution of the parents, resulting in the shape, size and position of the various human body components that influence the way one sounds when speaking or otherwise communicating with voice or through the mouth and nasal passages.
- One method of synthesizing voices and sounds is referred to as concatenative, and refers to the recording of waveform data samples of real human speech. The method then breaks down the pre-recorded original human speech into segments and generates speech utterances by linking these human speech segments to build syllables, words, or phrases. The size of these segments varies.
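- A minimal sketch of the concatenative idea just described is shown below: pre-recorded waveform segments (here, hypothetical diphone units filled with placeholder samples) are looked up and joined end-to-end to build a larger utterance. Real concatenative systems also smooth the joins and adjust prosody; none of that is attempted here.

```python
# Toy concatenative synthesis: chain pre-recorded unit waveforms end-to-end.
# The unit inventory and its contents are invented placeholders.
import numpy as np

unit_db = {                      # hypothetical unit inventory: name -> waveform samples
    "h-e": np.zeros(800),
    "e-l": np.zeros(800),
    "l-o": np.zeros(800),
}

def concatenate_units(unit_names, db=unit_db):
    """Build an utterance by joining pre-recorded speech segments in order."""
    return np.concatenate([db[name] for name in unit_names])

utterance = concatenate_units(["h-e", "e-l", "l-o"])   # 2400 samples of joined units
```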
- Another method of human speech synthesis is known as parametric. In this method, mathematical models are used to recreate a desired speech sound. For each desired sound, a mathematical model or function is used to generate that sound. As such, the parametric method is generally without human sound as an element.
- Among parametric speech synthesizers there are generally a few well-known types.
- One is the articulatory synthesizer, which mathematically models the physical aspects of the human lungs, larynx, and vocal and nasal tracts.
- the other type of parametric speech synthesizer is known as a formant synthesizer, which mathematically models the acoustic aspects of the human vocal tract.
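- To make the formant idea concrete, the sketch below passes a periodic, glottal-like pulse train through second-order resonators tuned to a few formant frequencies, in the general source-filter spirit described above. The sample rate, formant values, and bandwidths are rough, vowel-like numbers chosen for the example (using NumPy/SciPy), not parameters from the patent.

```python
# Toy formant synthesizer: impulse-train source filtered by formant resonators.
import numpy as np
from scipy.signal import lfilter

fs = 16000                                       # sample rate (Hz)
t = np.arange(int(0.5 * fs))                     # 0.5 s of samples
source = (t % (fs // 120) == 0).astype(float)    # ~120 Hz pulse train as a "glottal" source

def resonator(x, freq_hz, bw_hz, fs=fs):
    """Second-order resonant filter modelling one formant."""
    r = np.exp(-np.pi * bw_hz / fs)
    theta = 2 * np.pi * freq_hz / fs
    a = [1.0, -2 * r * np.cos(theta), r * r]
    b = [sum(a)]                                 # normalize for unity gain at DC
    return lfilter(b, a, x)

speech = source
for formant_hz, bw_hz in [(700, 80), (1220, 90), (2600, 120)]:   # rough /a/-like formants
    speech = resonator(speech, formant_hz, bw_hz)
```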
- Other systems include means for recognizing a specific voice, once the using system has been trained in that voice. Examples of this include the various speech recognition systems useful in the field of capturing spoken language and then translating those sounds into text, such as with systems for dictation and the like.
- Other speech related systems concern the field of biometrics, and use of certain spoken words as security codes or ciphers.
- FIG. 1 is a schematic diagram of one embodiment of a system 10 for capturing an enabling portion of a specific voice sufficient for using that portion as a template in further use of the voice characteristics.
- System 10 may be part of a handheld device, such as an electronic handheld device, or it may be part of a computing device of the size of a laptop, a notebook, or a desktop; or system 10 may be part of merely a circuit board within another device, or an electronics component or element designed for temporary or permanent placement in or use with another electronic element, circuit, or system; or system 10 may, in whole or in part, comprise computer readable code or merely a logic or functional circuit in a neural system; or system 10 may be formed as some other device or product such as a distributed network-style system.
- system 10 comprises input or capture means 15 for capturing or receiving a portion of a voice for processing and construction of a voice algorithm or template means 19, which may be formed as a stream of data, a data package, a telecommunications signal, software code means for defining and re-generating a specific voice, or a plurality of voice characteristics organized for application to or template on another organization of sound or noise suitable to arrange the sound or noise as an apparent voice of an originator's voice.
- Other means of formatting computer readable program code means, or other means, for causing use of certain identified voice characteristics data to artificially generate a voice is also contemplated within this invention.
- the logic or rules of the algorithm or template means 19 are preferably formed with a minimum of voice input; however, various amounts of voice and other data may be desired to form an acceptable data set for a particular voice.
- an enabling portion of a human voice may be captured, for example, with a small amount of analog or digital recording, or real-time live input, of the person's voice that is to be templated.
- a prescribed grouping of words may be formed to optimize data capture of the most relevant voice characteristics of the person to enable accurate replication of the voice.
- Analysis means are contemplated for most efficiently determining what form of enabling portion is best for a particular person. Whether by a single data input or a series of inputs, the voice data is captured and stored in at least one portion of storage means 22.
- Analysis of the voice data is performed at processor means 25, to identify characteristics useful in creating a template of that specific user's voice. It is recognized that the voice data may be routed directly to the processor means and need not necessarily go initially to the storage means 22. Further exemplary discussion of the interaction among the processor means, storage means, and the template means is found below, and in relation to Figures 2-8. After adequate voice data has been analyzed, a template of the voice is, in one embodiment, stored until called for by the processor means 25.
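- The capture, analysis, and templating flow just described can be pictured schematically as below. All class and function names are illustrative stand-ins for the patent's capture means 15, processor means 25, storage means 22, and template means 19, and the "characteristics" extracted are toy placeholders.

```python
# Schematic sketch of capture -> analyze -> template; not the patented implementation.
import numpy as np

class VoiceTemplateSystem:
    def __init__(self):
        self.storage = {}                           # stand-in for storage means 22

    def capture(self, voice_id, samples):
        """Store an enabling portion of the voice (input/capture means 15)."""
        self.storage[voice_id] = {"samples": np.asarray(samples, dtype=float)}

    def analyze(self, voice_id):
        """Processor means 25: extract a few toy characteristics from the samples."""
        x = self.storage[voice_id]["samples"]
        return {"energy": float(np.mean(x ** 2)),
                "zero_crossings": int(np.sum(np.diff(np.sign(x)) != 0))}

    def build_template(self, voice_id):
        """Template means 19: bundle characteristics into a reusable template."""
        template = {"voice_id": voice_id, "characteristics": self.analyze(voice_id)}
        self.storage[voice_id]["template"] = template    # retained until a demand request
        return template
```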
- After voice AA has had an enabling portion captured, analyzed and templated (now referred to as AAt), it is stored in a storage means 22 (which may be either resident near the other components or located in a remote or distributed mode at one or more locations) until a demand request occurs.
- a demand request is a user of system 10 submitting a request via representative input means 29 to utilize the voice AA template AAt in a newly created conversation, with voice AA participating as a generated voice rather than an actual, live use of voice AA. This may occur in conjunction with or utilization of one or more various databases, a few of which are represented by situational database 33 or personal database 36.
- voice AA template AAt is called and provided as a forming mechanism with certain other noise to create a new conversational voice AA1 that, once formed, sounds precisely like the original voice AA of the originally inputted data.
- although the new voice AA1 sounds like original voice AA in all respects, it is actually an artificially created voice, with the template AAt providing the matching key, such as a genetic code, to voice AA.
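- The patent describes combining the stored template with other noise to produce speech that sounds like the original voice, without prescribing a particular algorithm here. Purely as an illustration of that idea, the sketch below shapes random noise with a magnitude envelope assumed to be carried by the template; the envelope values and function name are invented for the example.

```python
# Illustrative only: shape white noise with a spectral envelope taken from a template.
import numpy as np

def generate_from_template(template_envelope, n_samples=16000, seed=0):
    """Shape random noise with a stored magnitude envelope (not the patented method)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n_samples)                 # the "other noise" input
    spectrum = np.fft.rfft(noise)
    envelope = np.interp(np.linspace(0, 1, spectrum.size),
                         np.linspace(0, 1, len(template_envelope)),
                         template_envelope)                # stretch envelope over the spectrum
    return np.fft.irfft(spectrum * envelope, n=n_samples)  # noise now carries the template's shape

shaped = generate_from_template(template_envelope=[1.0, 0.8, 0.3, 0.1, 0.02])
```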
- An additional necessity may be to have means for verifying that voices heard are either real or templated, in order to ensure against fraudulent or unauthorized use of such created voices.
- Legal mechanisms may need to be created to recognize this realm of technology, in addition to the licensing, contract, and other mechanisms already in existence in most countries.
- connection means 41 represents pathways for energy or data flow which may be actual leads, light channels, or other electronic, biologic or other activatable paths among system components.
- power means 44 is shown within system 10, but may also be remote if desired.
- the algorithm, signal, code means or template which is created in whole or in part may be returned for storage or refinement within either storage means 22, template means 19, or other system component or architecture. This capability permits and facilitates improvement or adaptation of the specific voice template according to the instructions of the creator or another user.
- it may be useful to input the one or more similar characteristics from voice BB as either limited or general refinement inputs to voice AA1 or voice template AAt. It is then possible to also retain voice BB and create a voice BB1 and voice template BBt, either of which may be useful at a future date.
- Another example includes creation of a database of variously refined voices for a single originator of the voice, useful on demand or as appropriate by system or user, according to the situation that is presented.
- a service may be offered to voice match and provide suitable refinement tools, such as natural or artificially generated waveforms or other acoustic or signal elements, to refine voice templates according to the user's desires.
- the ability to provide machine, component, or computer readable code means as part of the signal forming or transmitting of the voice template process or product further facilitates use of this technology.
- Means to tie or activate use of this voice templating and voice generating technology to streaming or other forms of data allows for virtual dialog, which may be adaptive and intelligent, as well as merely informational or reactive, and with such dialog or conversations being with voices selected by the user. It is also recognized that the technology herein disclosed may be utilized with visual images as well as aural sounds.
- a voice template as described herein may be created using data that does not include an actual enabling portion of an originator's voice, but the enabling portion of the originator's voice may be used, possibly with other data, to validate the replication accuracy of the originator's voice.
- a templated or replicated voice may be used to interact with or prompt users of computers or other machines and systems. The user may select such templated voice from either her own library of templated voices, another source of templated voices, or she may simply create a new voice.
- templated voice AA1 may be selected by the user for voicemail prompts or reading of texts, or other communication interface, whereas templated voice CC may be selected for use in relation to an interactive entertainment use. Troubleshooting or problems lurking in the user's machine, or alerting signals to a user of a device, may be identified or resolved by the user while working with templated voice DD. These are simply examples of how this technology will enable improved user interface and association by the user with functions, tasks, modes or other features by use of templated voice technology.
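- The assignment of different templated voices to different machine functions, as in the example above, amounts to a simple mapping from task to voice. The sketch below is an assumed illustration of that bookkeeping; the mapping keys and helper name are invented.

```python
# Hypothetical task-to-voice assignment matching the example above:
# AA1 for voicemail prompts and text reading, CC for entertainment, DD for troubleshooting.

voice_assignments = {
    "voicemail_prompt": "AA1",
    "text_reader": "AA1",
    "entertainment": "CC",
    "troubleshooting": "DD",
}

def voice_for(task: str, default: str = "system-default") -> str:
    """Pick which templated voice speaks for a given task."""
    return voice_assignments.get(task, default)

print(voice_for("troubleshooting"))   # -> "DD"
```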
- Template selection and use, and generated voice creation and use, may be accomplished either within the user's machine or device, partially within the user's machine or device, or external of the user's machine or device.
- a traveler may wish to carry or access certain voices for accompaniment of the traveler on aircraft, or in hotel rooms.
- the invention may be useful in hospital or hospice rooms, or other locations. These uses are possible with one or more of the embodiments herein. Interestingly, this system may also be used by some individuals on their own voice and given as a legacy to others. Many other uses are within the scope of the teachings herein.
- a parent desired her child to learn about race relations in the United States in the decade of the 1960s using one of the child's deceased grandparents' voices.
- the templated voice of the selected grandparent would be designed, manufactured and designated for use.
- System 10 would access one or more databases to harvest information and knowledge about the designated topic and provide that information to one or more databases within system 10, such as situational database 33, for use as needed.
- the grandparent's templated voice EE1 would be used, following access to the desired information, and the demand request would be met by the templated voice EE1 commencing a discussion on the designated topic when desired.
- Such discussion can be saved for later use within system 10 or at a remote location as desired, or the discussion may be interactive between the "grandparent" i.e. the templated voice, and the child.
- This feature is possible by use of a voice recognition module to know in advance of the discussion the identity of the child's voice and to include adequate vocabulary and neural cognition of the various question combinations likely from the child.
- a bridge would be provided from the input and voice recognition module to the templated voice portion of the system, to enable responsiveness by the templated voice.
- Various speech recognition tools are conceivable for use in this manner, when so configured according to the novel uses described herein. Of course this configuration also requires means to rapidly search for the answer to the question and to formulate a response appropriate to the listening child.
- this example illustrates the extraordinary potential of this technology, particularly when combined with suitable data, system power, and system speed.
- With the optional voice recognition module it is possible to utilize only limited features, to enable a listener of a templated voice to direct the generated voice to cease or continue, or to enable certain other features with certain commands. This would be a form of limited interactive mode appropriate for some but not all types of use. Even if the user chose not to use the optional features and instead merely arranged for a story or a discussion in the absent grandparent's voice, the effect and utility of this is enormous for this or other types of uses.
- the templated voice may again be that of the grandparent selected above (templated voice EE1), and the filter of DATA DATES is used with a selected date of "BEFORE DECEMBER 1963" for a discussion of race relations in the United States in the decade of the 1960s. The result would be a discussion that would not include any information that occurred after the designated date. In this example, the "grandparent" could not discuss the Voting Rights Act of 1965 or the urban riots of the late 1960s in that country.
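- The DATA DATES filter described above can be pictured as a simple cutoff over a dated knowledge base, as in the sketch below. The fact list, field names, and function name are invented for the example; only the filtering idea comes from the passage above.

```python
# Sketch of a DATA DATES-style filter: knowledge items later than the selected
# cutoff are excluded from what the templated voice may discuss.
from datetime import date

knowledge_base = [
    {"topic": "March on Washington", "date": date(1963, 8, 28)},
    {"topic": "Voting Rights Act",   "date": date(1965, 8, 6)},
]

def filter_by_cutoff(items, cutoff):
    """Keep only items known before the cutoff date."""
    return [item for item in items if item["date"] < cutoff]

allowed = filter_by_cutoff(knowledge_base, cutoff=date(1963, 12, 1))
# -> only the March on Washington entry survives the "BEFORE DECEMBER 1963" filter
```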
- a user may direct a templated voice of a loved one or someone else to read to the user.
- it is possible for people of all ages to have books read to them in the voice of an absent or deceased family member or other person known to the user.
- this innovation alone will provide enormous benefit to users.
- This type of use has wide applications beyond the specific example just provided. Indeed, an even broader use of this technology in this manner is to have available a database of authorized and templated voices which may be accessible and useable by others for a fee or other form of compensation.
- When used for music, this technology has similar profound implications, particularly if one can access templated voices of past and present singers of renown, many of whose voices are still available for templating.
- this technology enables a new industry of manufacturing, leasing, purchasing, or otherwise using voice templates and associated means, techniques and methods of conducting business therewith.
- the invention may also have utility in medical treatments for certain minor or major psychological ailments, for which proper use of templated voice therapy may be quite palliative or even therapeutic.
- Yet another possible use of this technology is to create a newly designed voice for use, but one which has a basis or precursor in one or more templated voices from actual mammalian origin. Ownership and further use of the newly created voice may be controllable under various means or legal enforcement, such as licensing or royalties and the like.
- Such voices may be retained as private possessions for limited use by the creator as well.
- Such voices will represent the creative aspirations of the creator, but each voice will actually have a component or strain of actual mammalian voice as a basis through use of the templating tool or code, similar to a strand of tissue DNA but applicable to a specific voice.
- This type of combination presents powerful new communication capabilities and relationships based on voice and other sounds created by mammals.
- Systems according to the invention may be handheld or of other size. Systems may be embedded in other systems or may be stand alone in operation. The systems and methods herein may have part or all of the elements in a distributed, network or other remote system of relationship. Systems and methods herein may utilize downloadable or remotely accessible data, and may be used for control of various other systems or methods or processes.
- Embodiments of the invention include exposed interface routines for requesting and implementing the methods and operations disclosed herein, but which may be carried out in whole or in part by other operating or application systems.
- the templating process and the use of templated voices may be accomplished and used by either mammals or artificial machines or processes. For example, a bot or other intelligent aide may create or use one or more templated voices of this type.
- Such an aide may also be utilized to search for voices automatically according to certain general or limited criteria, and may then generate templated voices in voice factories, either virtual or physical. In this manner, large databases of templated voices may be efficiently created.
- it may be desirable to create and apply data or other types of tagging and identification technology to one or more portions of the actual voice utilized to create a templated voice.
- a templating process using elements of the embodiments herein yields a voice coding signal, comprising the logic structure of characteristics of a specific voice essential for accurately replicating the sound of that voice.
- Example 3 A home energy monitor, reporter, or mate, using one or more selected voices using the technology herein.
- a hotel room assistant, or automobile assistant to prompt the user according to desired prompting such as for example a wake-up call in a hotel in the voice selected by the user.
- an operator of a vehicle might receive information in the voice or voices selected by the user.
- Example 8 Using the voice template technology in combination with other visual media, such as with a photograph, digital video or a holographic image.
- a personal device that scans and updates downloadable information for a user as desired in voice or voices of one's choosing. For example, this may be useful for organizing actions capable of being done by a bot, such as an info-bot for background searching and interface while the user is not available and then reporting status to the user in one or more designated voices using the technology herein.
- a safety reminder when used with one or more components of gear or equipment in the workplace, such as a personal computer posture monitor, electrical equipment, dangerous equipment, etc.
- voice activated systems such as dictation devices, as prompts, companions, or text readers.
- Example 14 Using the technology disclosed herein as social mediation or control mechanisms, such as a tool against road rage or other forms of anger and frustration, activatable by driver or automatically, or by other means.
- Example 15 Using the technology disclosed herein as a teaching tool in home, school or the workplace.
- Example 17 Using the technology disclosed herein as a tool to act as a family history machine.
- Using the technology disclosed herein as a VoiceSelectTM brand of movie or video match technology to utilize preferred voices for templating of entertainment script already used by the original performer or subsequently created for voice template technology combination uses.
- an "alter ego" device such as a handheld unit which engages on "SelectVoiceTM" brand or "VoiceXTM" brand mode(s) of operation and has a database of images of those who match the voice as well as anonymous models which can be selected, similar to that referred to in Example 7.
- Using the technology disclosed herein as a bedtime reader or a night mate in a dwelling for monitoring and interactive security.
- Figure 2 is a flow diagram of one embodiment of a voice capture subsystem which may comprise computer readable code means or method for accomplishing the capture, analysis and use of a voice AA designated for templating.
- Figure 3 is one embodiment of a voice analysis subsystem which may comprise logic or method means for efficiently determining voice data characterization routing.
- voice AA is captured in acquisition module or step 103 and then routed by logic steps and data conductive pathways, such as pathway 106, through the templating process. Capture may be accomplished by either digital or analog methods and components.
- the signal which then represents captured voice AA is routed through analysis means 111 or method to determine whether an existing voice profile or template matches voice AA.
- This may be accomplished, for example, by comparing one or a plurality of characteristics (such as those shown in voice characterization subsystem 113 of Figure 4) as determined by either acquisition module 103 or analysis means 111, and then comparing those one or more characteristics with known voice profiles or templates available for access, such as at analysis step 111.
- Representative feedback and initial analysis loop 114 facilitates these steps, as does pathway 116.
- Such comparison may include querying of a voice profile database or other storage medium, either locally or remotely.
- the analysis step at analysis module 111 and voice characterization subsystem 113 may be repeated according to algorithmic, statistical or other techniques to affirm whether the voice being analyzed does or does not relate or match an existing voice profile or data file.
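- One simple way to picture the matching step above is a nearest-profile comparison over a vector of voice characteristics, with a match declared only when the closest stored profile is within a threshold. The sketch below is an assumed illustration; the feature vector, its contents, and the threshold are invented for the example.

```python
# Illustrative profile matching: compare a captured voice's characteristic vector
# against stored profiles and accept only a sufficiently close match.
import numpy as np

profiles = {                                  # stored voice profiles (feature vectors)
    "AA": np.array([120.0, 0.35, 2.1]),       # e.g. pitch, tempo, spectral tilt (made up)
    "BB": np.array([210.0, 0.50, 1.4]),
}

def match_profile(features, profiles=profiles, threshold=10.0):
    """Return the closest stored profile id, or None if nothing is close enough."""
    best_id, best_dist = None, float("inf")
    for voice_id, stored in profiles.items():
        dist = float(np.linalg.norm(np.asarray(features) - stored))
        if dist < best_dist:
            best_id, best_dist = voice_id, dist
    return best_id if best_dist <= threshold else None

print(match_profile([118.0, 0.33, 2.0]))      # -> "AA"
```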
- Figure 4 provides further detail of voice characterization subsystem 113.
- If the signal corresponding to voice AA does not have a match or is not identified with an existing voice profile set, then the signal is routed to the voice characterization subsystem for comprehensive characterization.
- creation of a template may not be required at module/step 127.
- the signal might be analyzed and/or characterized for possible generation of a revised profile or template, which itself may then be stored or applied. This situation might occur, for example, when additional characterization data is available (such as size of enabling portion, existence or lack of stress, or other factors) which had not been previously available.
- a specific voice data file might comprise a plurality of templates.
- This is a validation process, having logic steps and system components shown generally at validation subsystem 133 in Figures 2 and 3. It is emphasized that, as to relational location to subsystems and components, these Figures are generally schematic. Also, as shown in Figure 3, after determination that a voice profile data file exists (step 137), the validation logic at step 139 will, optionally, occur. If a revision of an existing template is merited, then it is generated at step 142. Alternatively, logic step 145 notes that no revision to an existing template is to be made. Following either step 142 or 145, the new, revised, or previous voice profile or template is stored or used at step 155.
- the template creation module/step 127 of Figure 2 comprises utilizing the voice characterization subsystem to create a unique identifier, preferably a digital identifier, for that specific voice being templated or profiled.
- This data is similar, in the abstract, to genetic codes, gene sequence codes, or bar codes, and like identifiers of singularly unique objects, entities or phenomena. Accordingly, applicants refer to this voice profile or template as "Voice Template TechnologyTM" as well as "Voice DNATM or VDNATM" and "Voice Sequence CodesTM or Voice Sequence CodingTM".
- the terms "Profile, Profiles or Profiling" and derivative terms may be substituted in the above trademark or other reference terms for this new technology.
- Figure 4 is a schematic representation of a voice characterization subsystem.
- This disclosure comprises at least one embodiment of characterization data and means for determining and characterizing salient data to define a voice using voice templating or profiling, as disclosed herein. As shown, various types of data are available for comparison in formulating the characterization data. This characterization data will then be used to create the voice template or profile according to coding criteria.
- Although the data in Figure 4 appears to be arranged in discrete modules, an open comparator process may be preferred in which any data may be accessed for comparison in any of various sequences or weighted priorities.
- data may comprise the categories of language, gender, dialect, region, or accent (shown as "Voice Characteristics" output signal VC0 at module or step 201); frequency, pitch, tone, duration, or amplitude (shown as output signal VC1 at module or step 203); age, health, pronunciation, vocabulary, or physiology (shown as output signal VC2 at module or step 205); patterns, syntax, volume, transition, or voice type (shown as output signal VC3 at module or step 207); education, experience, phase, repetition, or grammar (shown as output signal VC4 at module or step 209); occupation, nationality, ethnicity, custom or setting (shown as output signal VC5 at module or step 211); context
- VCX encompasses any known categorization technique at the time of interpretation, regardless of mention herein, provided it is useful in then defining a unique voice profile or template for a specific voice, and is used according to the novel teachings disclosed herein.
- data combined in voice characteristic files and output signals VC0, VC1, VC2, VC3, VC4, VC5, VC6, VC7, VC8, VC9, VC10, VC11, VC12, and VCX may be prioritized and combined in various ways in order to accurately and efficiently analyze and characterize a voice, with VCX representing still further techniques incorporated herein by reference.
- Figures 5 and 6 illustrate an exemplary signal bundler suitable for receiving the various voice characteristic data, such as digital or coded data representative of the information deemed relevant and formative of the voice being templated.
- the signal bundler 316 then combines the output of signal content module or step 332 and values/scoring from one or more signals VC0-VCX and formats the signal or code at module or step 343 as appropriate for proper transfer and use by various potential user interfaces, devices or transmission means to create an output voice template, code, or signal VTX.
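- The bundling step above can be thought of as collecting the characterization outputs, optionally weighting them, and serializing the result into a single portable template signal. The sketch below is an assumed illustration of that idea; JSON is used only as a convenient stand-in for whatever transfer format a real implementation would choose, and all field names are invented.

```python
# Illustrative signal bundler: combine VC0..VCX characterization values into one
# serialized template record (a stand-in for the output signal VTX).
import json

def bundle_template(voice_id, vc_signals, weights=None):
    """Combine characterization signals into one portable template record."""
    weights = weights or {name: 1.0 for name in vc_signals}
    payload = {
        "voice_id": voice_id,
        "characteristics": {name: {"value": value, "weight": weights.get(name, 1.0)}
                            for name, value in vc_signals.items()},
        "format_version": 1,
    }
    return json.dumps(payload)          # serialized template signal, ready for transfer

vtx = bundle_template("AA", {"VC0": "en-US/neutral", "VC1": {"pitch_hz": 120}})
```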
- Figure 7 is a representative organization and method of an electronic query and transfer between a voice template generation or storage facility 404 and a remote user.
- enabling portions may be sent to a remote voice template generation or storage facility 404 by any number of various users 410, 413, 416.
- the facility 404 then generates or retrieves a voice template data file and creates or retrieves a voice template signal.
- the template signal is then transmitted or downloaded to the user or its designee, shown at step 437.
- the template signal is formatted for appropriate use by a destination device, including activation instructions and protocols, shown at step/module 457.
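- The remote exchange of Figure 7 can be sketched as a simple request/response flow: a user submits an enabling portion, the facility generates or retrieves a template, and a template signal formatted for the destination device comes back. In the sketch below both sides are simulated in-process and every name is an illustrative stand-in, not an actual interface of the system.

```python
# Schematic sketch of the Figure 7 exchange (steps 410 -> 404 -> 437 -> 457), simulated locally.

def facility_generate_template(enabling_portion: bytes) -> dict:
    """Stand-in for facility 404: derive a template data file from the uploaded portion."""
    return {"template_id": hash(enabling_portion) & 0xFFFF, "version": 1}

def format_for_device(template: dict, device: str) -> dict:
    """Stand-in for step 457: wrap the template with activation instructions for the device."""
    return {"template": template, "device": device, "activation": "on-demand"}

def user_request(enabling_portion: bytes, device: str) -> dict:
    """Stand-in for a user (410) uploading a portion and receiving the formatted signal (437)."""
    template = facility_generate_template(enabling_portion)
    return format_for_device(template, device)

signal = user_request(b"captured voice AA samples", device="hotel-door-card")
```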
- Figure 8 is a schematic representation of a mobile medium, such as a card, disk, or chip on which are essential components, depending on the user mode and need, for utilizing voice template technology.
- a hotel door card 477 may be provided to a traveler at check-in to a hotel.
- additional features incorporating aspects of this invention may be made available.
- a schematic representation of optional features within such a card include means 481 for receiving and using a voice template for a voice or voices selected by the traveler for various purposes during the traveler's stay at the hotel.
- such features may include a template receiving and storage element 501, a noise generator or generator circuitry 506, a central processing unit 511, input/output circuitry 515, digital to analog/analog to digital elements 518, and clock means 521.
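- For illustration, the card's functional elements listed above could be represented as a simple configuration object, as in the sketch below. The field names mirror the description and the reference numerals; every value is a placeholder assumption, not a specification from the patent.

```python
# Hypothetical configuration object for the Figure 8 mobile medium.
from dataclasses import dataclass

@dataclass
class VoiceTemplateCard:
    template_storage_kb: int = 64      # template receiving and storage element 501
    has_noise_generator: bool = True   # noise generator or generator circuitry 506
    cpu: str = "embedded"              # central processing unit 511
    io_channels: int = 2               # input/output circuitry 515
    codec: str = "ADC/DAC"             # digital to analog / analog to digital elements 518
    clock_hz: int = 32768              # clock means 521

card = VoiceTemplateCard()
```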
- various other elements may be utilized, such as voice compression or expansion means, such as those known in the cellular phone industry, or other components to enable the card to function as desired.
- the user may then enjoy dialog or interface with inanimate devices within the hotel in the voice(s) selected by the traveler. Indeed, a traveler profile may even retain such voice preference information, as appropriate, and certain added billings or benefits may accrue through use of this invention.
- Figure 9 is a depiction of a photograph 602 which is configured for interactive use of voice template technology with voice JJ attributable to figure FJJ and voice KK attributable to figure FKK.
- Means are combined with the frame 610 or other structure, whether computer readable code means or simple three dimensional material, for interfacing the subjects or objects of the photo (or other media) with the appropriate voice templates to recreate a dialogue that either likely occurred or could have occurred, as desired by the user.
- various means and methods exist to capture, analyze, and synthesize real and artificial voice components.
- the following United States patents, and their cited or listed references, illustrate a few of the means for capturing, synthesizing, translating, recognizing, characterizing or otherwise analyzing voices, and are incorporated herein in their entirety by reference for such teachings: 4,493,050; 4,710,959; 5,930,755; 5,307,444; 5,890,117; 5,030,101; 4,257,304; 5,794,193; 5,774,837; 5,634,085; 5,704,007; 5,280,527; 5,465,290; 5,428,707; 5,231,670; 4,914,703; 4,803,729; 5,850,627; 5,765,132; 5,715,367; 4,829,578; 4,903,305; 4,805,218; 5,915,236; 5,920,836; 5,909,666;
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Machine Translation (AREA)
- User Interface Of Digital Computer (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Systems and methods are provided for capturing (103) an enabling portion of a voice and then creating a voice template (127) or profile signal which can be combined at a later time with noise of foreign origin in order to reconstitute the original voice. This reconstituted voice can then be used to say any form or content supplied as digital input, and to say content that was never said in its original form by the original voice. Products and methods for online use, as well as certain commercial and industrial applications, are also provided.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16716899P | 1999-11-23 | 1999-11-23 | |
US167168P | 1999-11-23 | ||
PCT/US2000/032328 WO2001039180A1 (fr) | 1999-11-23 | 2000-11-23 | Systeme et procede pour creer des modeles de voix humaines determinees |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1252620A1 true EP1252620A1 (fr) | 2002-10-30 |
Family
ID=22606225
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP00983768A Withdrawn EP1252620A1 (fr) | 1999-11-23 | 2000-11-23 | Systeme et procede pour creer des modeles de voix humaines determinees |
Country Status (13)
Country | Link |
---|---|
EP (1) | EP1252620A1 (fr) |
JP (1) | JP2003515768A (fr) |
KR (1) | KR20020060975A (fr) |
CN (1) | CN1391690A (fr) |
AP (1) | AP2002002524A0 (fr) |
AU (1) | AU2048001A (fr) |
BR (1) | BR0015773A (fr) |
CA (1) | CA2392436A1 (fr) |
EA (1) | EA004079B1 (fr) |
IL (1) | IL149813A0 (fr) |
NO (1) | NO20022406L (fr) |
WO (1) | WO2001039180A1 (fr) |
ZA (1) | ZA200204036B (fr) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7697827B2 (en) | 2005-10-17 | 2010-04-13 | Konicek Jeffrey C | User-friendlier interfaces for a camera |
US8155964B2 (en) * | 2007-06-06 | 2012-04-10 | Panasonic Corporation | Voice quality edit device and voice quality edit method |
US9240182B2 (en) * | 2013-09-17 | 2016-01-19 | Qualcomm Incorporated | Method and apparatus for adjusting detection threshold for activating voice assistant function |
US9552810B2 (en) | 2015-03-31 | 2017-01-24 | International Business Machines Corporation | Customizable and individualized speech recognition settings interface for users with language accents |
RU2617918C2 (ru) * | 2015-06-19 | 2017-04-28 | Иосиф Исаакович Лившиц | Способ формирования образа человека с учетом характеристик его психологического портрета, полученных под контролем полиграфа |
KR101963195B1 (ko) * | 2017-06-21 | 2019-03-28 | 구동하 | 사용자 음성을 이용한 생리 주기 결정 방법 및 이를 실행하는 서버 |
US11093554B2 (en) | 2017-09-15 | 2021-08-17 | Kohler Co. | Feedback for water consuming appliance |
US11314215B2 (en) | 2017-09-15 | 2022-04-26 | Kohler Co. | Apparatus controlling bathroom appliance lighting based on user identity |
US11099540B2 (en) | 2017-09-15 | 2021-08-24 | Kohler Co. | User identity in household appliances |
US10448762B2 (en) | 2017-09-15 | 2019-10-22 | Kohler Co. | Mirror |
US10887125B2 (en) | 2017-09-15 | 2021-01-05 | Kohler Co. | Bathroom speaker |
CN109298642B (zh) * | 2018-09-20 | 2021-08-27 | 三星电子(中国)研发中心 | 采用智能音箱进行监控的方法及装置 |
KR102466736B1 (ko) * | 2021-06-18 | 2022-11-14 | 주식회사 한글과컴퓨터 | 사용자에 의해 입력된 음성을 기초로 본인 인증을 수행하는 음성 기반의 사용자 인증 서버 및 그 동작 방법 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5007081A (en) * | 1989-01-05 | 1991-04-09 | Origin Technology, Inc. | Speech activated telephone |
US5594789A (en) * | 1994-10-13 | 1997-01-14 | Bell Atlantic Network Services, Inc. | Transaction implementation in video dial tone network |
US5717828A (en) * | 1995-03-15 | 1998-02-10 | Syracuse Language Systems | Speech recognition apparatus and method for learning |
US5774841A (en) * | 1995-09-20 | 1998-06-30 | The United States Of America As Represented By The Adminstrator Of The National Aeronautics And Space Administration | Real-time reconfigurable adaptive speech recognition command and control apparatus and method |
-
2000
- 2000-11-23 KR KR1020027006630A patent/KR20020060975A/ko not_active Application Discontinuation
- 2000-11-23 CA CA002392436A patent/CA2392436A1/fr not_active Abandoned
- 2000-11-23 AP APAP/P/2002/002524A patent/AP2002002524A0/en unknown
- 2000-11-23 IL IL14981300A patent/IL149813A0/xx unknown
- 2000-11-23 CN CN00816092A patent/CN1391690A/zh active Pending
- 2000-11-23 BR BR0015773-2A patent/BR0015773A/pt not_active IP Right Cessation
- 2000-11-23 EP EP00983768A patent/EP1252620A1/fr not_active Withdrawn
- 2000-11-23 JP JP2001540763A patent/JP2003515768A/ja active Pending
- 2000-11-23 AU AU20480/01A patent/AU2048001A/en not_active Abandoned
- 2000-11-23 EA EA200200587A patent/EA004079B1/ru not_active IP Right Cessation
- 2000-11-23 WO PCT/US2000/032328 patent/WO2001039180A1/fr not_active Application Discontinuation
-
2002
- 2002-05-21 ZA ZA200204036A patent/ZA200204036B/xx unknown
- 2002-05-21 NO NO20022406A patent/NO20022406L/no unknown
Non-Patent Citations (1)
Title |
---|
See references of WO0139180A1 * |
Also Published As
Publication number | Publication date |
---|---|
EA004079B1 (ru) | 2003-12-25 |
CA2392436A1 (fr) | 2001-05-31 |
AU2048001A (en) | 2001-06-04 |
EA200200587A1 (ru) | 2002-10-31 |
ZA200204036B (en) | 2003-08-21 |
IL149813A0 (en) | 2002-11-10 |
WO2001039180A1 (fr) | 2001-05-31 |
JP2003515768A (ja) | 2003-05-07 |
BR0015773A (pt) | 2002-08-06 |
CN1391690A (zh) | 2003-01-15 |
AP2002002524A0 (en) | 2002-06-30 |
NO20022406D0 (no) | 2002-05-21 |
NO20022406L (no) | 2002-07-12 |
KR20020060975A (ko) | 2002-07-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20020072900A1 (en) | System and method of templating specific human voices | |
US12050574B2 (en) | Artificial intelligence platform with improved conversational ability and personality development | |
US10381016B2 (en) | Methods and apparatus for altering audio output signals | |
Gold et al. | Speech and audio signal processing: processing and perception of speech and music | |
JP2023501074A (ja) | ユーザ用の音声モデルを生成すること | |
US20050108011A1 (en) | System and method of templating specific human voices | |
CN107516511A (zh) | 意图识别和情绪的文本到语音学习系统 | |
CN106847258A (zh) | 用于共享调适语音简档的方法和设备 | |
WO2022184055A1 (fr) | Procédé et appareil de lecture de parole pour article, et dispositif, support de stockage et produit programme | |
WO2001039180A1 (fr) | Systeme et procede pour creer des modeles de voix humaines determinees | |
Yu et al. | BLTRCNN-based 3-D articulatory movement prediction: Learning articulatory synchronicity from both text and audio inputs | |
CN114048299A (zh) | 对话方法、装置、设备、计算机可读存储介质及程序产品 | |
JP2003271182A (ja) | 音響モデル作成装置及び音響モデル作成方法 | |
Ramati | Algorithmic Ventriloquism: The Contested State of Voice in AI Speech Generators | |
CN112885326A (zh) | 个性化语音合成模型创建、语音合成和测试方法及装置 | |
WO2004008295A2 (fr) | Systeme et procede pour une analyse medicale des caracteristiques de la voix | |
JP2024533345A (ja) | バーチャルコンサートの処理方法、処理装置、電子機器およびコンピュータプログラム | |
JP4840476B2 (ja) | 音声データ作成装置および音声データ作成方法 | |
Shah et al. | Voice Input based Attendance System | |
Lee et al. | The Sound of Hallucinations: Toward a more convincing emulation of internalized voices | |
US20240361827A1 (en) | Systems, Methods, And Devices to Curate and Present Content and Physical Elements Based on Personal Biometric Identifier Information | |
CN115132204B (zh) | 一种语音处理方法、设备、存储介质及计算机程序产品 | |
Own et al. | The Individual Perception in Synthetic Speech | |
Midtlyng et al. | Voice adaptation by color-encoded frame matching as a multi-objective optimization problem for future games | |
JP4356334B2 (ja) | 音声データ提供システムならびに音声データ作成装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20020621 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL |
|
AX | Request for extension of the european patent |
Free format text: AL;LT;LV;MK PAYMENT 20020621;RO;SI |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20020625 |