EP1847102A1 - Systeme et procede destines a generer et a repartir des supports personnalises - Google Patents

Systeme et procede destines a generer et a repartir des supports personnalises

Info

Publication number
EP1847102A1
Authority
EP
European Patent Office
Prior art keywords
clip
media
insert
master
personalized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP05711731A
Other languages
German (de)
English (en)
Other versions
EP1847102A4 (fr)
Inventor
Frederick Lowe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of EP1847102A1
Publication of EP1847102A4

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/036Insert-editing

Definitions

  • Embodiments of the invention pertain to the field of computer systems. More particularly, the invention is directed to a system and method for generating and distributing personalized media using computers.
  • Modern systems generate and utilize multimedia data in a plurality of different ways. For example, users can currently communicate information to and hear responses from systems that generate audio data and transmit that data back to the user over the telephone.
  • existing systems utilize a mapping between one form of data (e.g. numerical information or text data) and a set of audio files to generate an audio file for playback.
  • the system at the bank may, for example, obtain a user's account information via touchtone input and audibly play back that user's account information for purposes of confirmation.
  • Existing systems for building and distributing such audio files use the input to map to a set of prerecorded audio tracks and assemble a message for playback. The end result is oftentimes an awkward-sounding message that fails to seamlessly integrate the prerecorded audio tracks. In addition to obvious transitions between clips, gaps of silence and intonation differences make this type of personalized media disagreeable to the human ear.
  • Existing solutions do not provide a way to generate an audio file that seamlessly integrates a plurality of audio files in a way that makes the generated file sound like an original recording with undetectable transitions, rather than a computer generated message.
  • current systems do not personalize the content of the generated audio file based on user information automatically obtained from the device or software program utilized to access the system and/or context information associated with the user. For example, current systems do not provide a mechanism for automatically generating and disseminating personalized ring tones or ringback tones in real-time or in an asynchronous manner.
  • One or more embodiments of the invention are directed to a system and method for generating and distributing personalized media.
  • Personalized media is generated via one or more embodiments of the invention by selecting one or more master clips having predefined gaps, obtaining insert data (e.g., an insert clip), and seamlessly merging the insert data into the selected master clip to generate a media clip with undetectable transitions between spliced clips.
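The merge step just described can be sketched in code. The following is a minimal, illustrative Python sketch and not the claimed implementation: clips are modeled as lists of PCM samples, and a short linear crossfade at each boundary stands in for whatever seamless-splicing technique a given embodiment actually uses. All names and the overlap length are assumptions.

```python
# Illustrative sketch: merge insert clips into a master clip's predefined
# gaps, crossfading a few samples at each joint so the transition is not
# audible. Clips are modeled as lists of PCM sample values.

def crossfade(tail, head, overlap):
    """Blend the last `overlap` samples of `tail` into the first `overlap` of `head`."""
    mixed = []
    for i in range(overlap):
        w = i / overlap                      # ramp weight from 0.0 toward 1.0
        mixed.append(tail[-overlap + i] * (1 - w) + head[i] * w)
    return tail[:-overlap] + mixed + head[overlap:]

def merge(master_segments, insert_clips, overlap=4):
    """Interleave master segments with insert clips, crossfading each boundary."""
    out = master_segments[0]
    for insert, segment in zip(insert_clips, master_segments[1:]):
        out = crossfade(out, insert, overlap)    # splice insert into the gap
        out = crossfade(out, segment, overlap)   # resume the master clip
    return out
```

Merging one insert clip into a two-segment master yields a single clip whose length is the sum of the inputs minus the overlapped samples at each of the two joints.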
  • Personalized media is distributed via one or more embodiments of the invention by optionally adding dynamic context data to the media clip, formatting the media clip for a given client device and transmitting the resulting media clip to a client device. This distribution process may be in real-time or delayed until the media clip is requested at a later time.
  • Embodiments of the invention may utilize any computing environment from single processor computing systems to highly optimized multi-threaded server processes comprising seamless splicing of compressed media or any other architecture capable of achieving the desired scalability.
  • An insert clip may contain any type of data. In most instances, however, the insert clip is utilized for purposes of adding variables such as a name, place, time, gender, product name or any other desirable information to a master clip.
  • the integration between the master clip and the insert clip is seamless meaning that there are no human detectable transitions between insertion points in the media clip. Regardless of the size of the insert clip the finished media clip lacks any noticeable gaps or intonation changes. Even though the media clip is generated using a plurality of different clips, the media clip sounds as if it was originally recorded in one take. Flash animation or other types of multimedia data such as video can be added to the media clip to enhance the user experience during playback.
  • An insert clip comprising dynamic context data may include for example a time, date, location, temperature or any other information not available until the time of delivery.
  • the contents of the master clip and/or the insert clip may use any voice including the originator's voice, on many occasions celebrity voices or the voices of celebrity impersonators are utilized.
  • the master clip for instance, might be recorded by the celebrity and the insert clip recorded using a voice over artist.
  • embodiments of the invention provide a mechanism for generating and distributing personalized media clips using what sounds like and/or is the voice of a celebrity. For instance, once the system merges one or more master clips together with one or more insert clips and thereby generates the media clip, the system can provide the media clip to a device and/or program for real-time playback or asynchronous pickup for later playback.
  • embodiments of the invention may use computer synthesized and/or TTS (text to speech) software of varying complexity in order to simulate voices.
  • Playback of the media clip initiates at a number of different types of devices and can be triggered by a multitude of different events.
  • Some examples of the types of playback devices (also known herein as destination clients) used in accordance with one or more embodiments of the invention include (but are not limited to) a computational device configured to access a network (e.g., the World Wide Web (WWW)) via a browser, an email client, or some other network interface.
  • a cell phone or any other type of portable or non-portable device configured to output media clips (e.g., audio, video, etc...) may also function as a playback device.
  • a cell phone may obtain a personalized ring tone, personalized ringback tone, or personalized media clip for use with any cell phone function via a cell gateway server.
  • a cell phone itself may create a media clip locally for playing back personalized media in one or more embodiments of the invention during ring, ringback or at any other time. Any combination of generation of personalized media on a telephone server or locally is in keeping with the spirit of the invention.
  • playback devices may comprise for example a gasoline pump identifying a user via an RFID based device, such as SpeedPass®, a cash register comprising a credit card reader for identifying a customer, a slot machine or a hotel door comprising a hotel room key reader for identifying a guest, or any other device capable of identifying a user and playing back a media clip.
  • Figure 1 illustrates an architectural view of an embodiment of the invention for generating and distributing personalized media.
  • Figure 2 illustrates a method for generating personalized media in accordance with one or more embodiments of the invention.
  • Figure 3 illustrates a method for distributing personalized media in accordance with one or more embodiments of the invention.
  • Figure 4 illustrates a block diagram representing the elements of one or more media clips configured in accordance with one or more embodiments of the invention.
  • Figure 5 illustrates a flow chart showing the interactions of the system components for a telephonic ring tone and/or ringback tone embodiments of the invention.
  • Figure 6 illustrates an application comprising static clips that make up the master clip, and insert clips (otherwise known as tokens) for insertion into slots or insert points of a given duration within the master clip.
  • One or more embodiments of the invention are directed to a system and method for generating and distributing personalized media.
  • Personalized media is generated via one or more embodiments of the invention by selecting one or more static clips comprising a master clip having predefined gaps or slots, obtaining insert data (e.g., an insert clip), and seamlessly merging the insert data into the selected master clip to generate a media clip with undetectable transitions between spliced clips.
  • Personalized media is distributed via one or more embodiments of the invention by optionally adding dynamic context data to the media clip, formatting the media clip for a given client device and transmitting the resulting media clip to a client device. This distribution process may be in real-time or delayed until the media clip is requested at a later time.
  • Embodiments of the invention may utilize any computing environment from single processor computing systems to highly optimized multi-threaded server processes comprising seamless splicing of compressed media or any other architecture capable of achieving the desired scalability.
  • Figure 6 shows an "application", also known as a template, specifying static segments that remain the same from application to application, and slots or insert points with fixed or variable-length durations into which tokens or insert clips are inserted within the master clip.
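The "application" template of Figure 6 can be modeled as a simple data structure. The sketch below is illustrative only: the class names, the alternating static/slot layout, and the text-based rendering are all assumptions made for clarity; an actual embodiment would render audio clips rather than strings.

```python
# Illustrative model of an application/template: static segments that never
# change, interleaved with named slots that tokens (insert clips) fill at
# build time. Rendering here produces text; real embodiments splice media.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Slot:
    name: str                             # e.g. "user_name"
    max_duration_ms: Optional[int] = None # None = variable-length slot

@dataclass
class Template:
    parts: list  # alternating static segments and Slot insert points

    def render(self, tokens: dict) -> str:
        out = []
        for part in self.parts:
            out.append(tokens[part.name] if isinstance(part, Slot) else part)
        return " ".join(out)

# Hypothetical application mirroring the hotel-door example in the text.
greeting = Template(["Welcome", Slot("title"), Slot("surname"), "to the hotel"])
```

With tokens for "title" and "surname", the same template renders a different personalized message for each guest while the static segments stay fixed.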
  • An insert clip may contain any type of data. In most instances, however, the insert clip is utilized for purposes of adding variables such as a name, place, time, gender, product name or any other desirable information to a master clip.
  • the integration between the master clip and the insert clip is seamless meaning that there are no human detectable transitions between insertion points in the media clip. Regardless of the size of the insert clip the finished media clip lacks any noticeable gaps or intonation changes. Even though the media clip is generated using a plurality of different clips, the media clip sounds as if it was originally recorded in one take. Flash animation or other types of multimedia data such as video can be added to the media clip to enhance the user experience during playback.
  • An insert clip comprising dynamic context data may include for example a time, date, location, temperature or any other information not available until the time of delivery.
  • embodiments of the invention provide a mechanism for generating and distributing personalized media clips using what sounds like and/or is the voice of a celebrity. For instance, once the system merges one or more master clips together with one or more insert clips and thereby generates the media clip, the system can provide the media clip to a device and/or program for real-time playback or asynchronous pickup for later playback.
  • embodiments of the invention may use computer synthesized and/or TTS (text to speech) software of varying complexity in order to simulate voices.
  • a cell phone or any other type of portable or non-portable device configured to output media clips (e.g., audio, video, etc...) may function as a playback device.
  • Embodiments of the invention may generate and distribute personalized ring tones and ringback tones based on caller identification and dialed caller party number.
  • An example media or ring clip could utilize a celebrity voice to announce "[user name] your [relative type] is calling", where [user name] is the user's name spoken in the voice of a celebrity and [relative type] is selected from the list of ⁇ brother, mother, father, son, etc. ⁇ .
  • ringback tones, comprising the sound played to the calling user when waiting for another user to answer the phone, may be personalized based on the calling party, i.e., using the caller identification number.
  • the telephone server playing the ringback tone may query the personalized clip to play back utilizing the ANI and DNIS, i.e., caller identification and dialed called party number respectively.
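The ANI/DNIS query just described amounts to a keyed lookup. The following Python sketch is illustrative only; the table contents, clip filenames, and fallback behavior are assumptions, not part of the disclosure.

```python
# Illustrative ringback selection: the telephone server keys personalized
# clips by the (ANI, DNIS) pair -- caller ID and dialed called party number --
# and falls back to a standard tone when no personalization exists.

RINGBACK_TABLE = {
    # (ANI, DNIS) -> personalized clip (hypothetical entries)
    ("5551234567", "5559876543"): "clip_brother_calling.wav",
}
DEFAULT_RINGBACK = "standard_ring.wav"

def select_ringback(ani: str, dnis: str) -> str:
    """Return the personalized clip for this caller/callee pair, if any."""
    return RINGBACK_TABLE.get((ani, dnis), DEFAULT_RINGBACK)
```

This captures the point in the text that a particular caller may hear a personalized message while a different caller dialing the same number does not.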
  • any other telephone computer system or server coupled with the cell gateway may generate the resulting ring tone that plays on the receiving cell phone when it is ringing, or the resulting ringback tone that plays on the initiating cell phone when waiting for another user to answer.
  • generating the ring tones, ringback tones and messages with personalized variables locally on the cell phone itself is in keeping with the spirit of the invention.
  • An embodiment of the invention allows for an RFID-based device, such as SpeedPass®, to provide a unique identification to an RFID reader, which in turn provides for a personalized message to be played back by a gas pump electronic interface unit, which in this case would be the playback device.
  • the gas station local server, or company main server may contain the personalized variable information.
  • the unique identification is presented to either server, the resulting output media clip may be constructed on either server and played on the gas pump electronic interface unit.
  • Bluetooth devices in the vehicle or coupled with the user may also play back the output media clip if the gas pump electronic interface unit is configured with WiFi or other wireless technologies configured to request media output.
  • Another embodiment of the invention enables a playback device such as a kiosk for purchasing plane tickets or groceries to identify and play personalized media messages to a user.
  • Additional examples of playback devices used in embodiments of the invention include loyalty card readers, ATM machines, GPS devices in planes and cars.
  • Hotel electronic doors are another example playback device where the insertion of an electronic key into the guest's door plays a message such as "Welcome [title] [user surname]” with title and user surname set to "Ms.” and "Smith” respectively in this example.
  • Playback devices may connect to embodiments of the invention comprising computational resources. Alternatively, if the playback device itself has enough computational power and storage comprising personalized information, or can obtain the personalized information from an identifier associated with the user, the playback device may itself act as an embodiment of the invention in terms of constructing and playing the personalized media clip.
  • the hotel electronic door may comprise a network connection to the hotel's computing system. This connection may be wireless or wired.
  • the hotel computing system in this example may detect the electronic key or credit card-like magnetic key and determine the identification of the hotel guest.
  • the personalized message comprising the "Welcome [title] [user surname]" media clip would then be generated on the hotel's computing system, sent to the electronic door and played on a small speaker built into the electronic door.
  • Another playback device may be a credit card reader configured to play back a personalized message to a shopper after the user identifies themselves with the credit card.
  • media output in this case may include a Flash animation with the user's name and an audio track with the phrase, "Welcome [user name], your current purchase is missing your [time period] buy of [product name]", where [user name], [time period] and [product name] are insert clips that seamlessly combine with the master clip to create the output media clip.
  • Another example playback device may be a slot machine capable of identifying the user via credit card, RFID or hotel room key.
  • the slot machine could play a message such as "[User name], you just won [winning amount] dollars!" In this example, the slot machine may be networked to a server comprising the computational power and requisite personalization clips to create the output media clip, or the slot machine itself may obtain an identifier associated with the user and construct the media clip itself.
  • Another example playback device may be a digital cable set-top box where personalization occurs on a cable system server and is sent to the IP address of the cable box or uses the subscriber ID in order to encode a message on a data channel.
  • Another example playback device may be a toy, which may be personalized at the factory, at online purchase time, or at home through a network connection or through a wireless interface to a local computer with a network connection or configured to run as an embodiment of the invention. In the case of internet shopping, the purchaser may choose the personalization clips that are to be inserted into the toy before shipping. For example, this would allow the toy to sound like a famous cartoon character and arrive at the child's home preloaded. With inexpensive network devices available, network-capable toys would be able to be dynamically loaded with personalized output media clips. Toys containing processing units would be able to switch output media clips based on accelerometers that could be used to determine whether the older or younger sibling was playing with the toy.
  • the toy may cry out, "[user name] be nice to me", where [user name] would be the rougher of the two children in this example.
  • Context information may be used in this embodiment of the invention as set by the parent. Encryption may be utilized within the media clip holding portion of the device in order to prevent hackers from creating toys with unwanted sounds, words or gestures.
  • the time at which playback initiates depends upon the context of the device. Displaying a certain website, reading a particular email, calling a particular person, or being in a certain location are some examples of the different contexts that might trigger playback. These non-personal events or values may cause branching in determining what clips to splice together for final playback. For instance, a user of the system might initiate playback by visiting a certain web page (or some other type of online document or program) where the user will hear a personalized greeting from a celebrity. If, for example, the user visits an online bookstore, that user might receive a personal greeting from one of the user's favorite authors who then proceeds to promote his newest novel.
  • embodiments of the invention may take branching actions based on this context information.
  • Other examples include personalized messages via email, a cell phone or some other playback device.
  • a timer function or calendar function may initiate a media clip transmission.
  • Another example context function producing an asynchronous initiation of a media clip without user intervention may include a location context whereby a GPS receiver in a phone or car initiates a media message based on location. Any non-personalized information or information source may be used as a context source.
  • HTTP is a stateless protocol and connections are generated when needed by a requesting device; therefore, devices accessing embodiments of the invention over this protocol must employ different means of recognizing asynchronous notification, such as polling or maintaining an open connection over a separate communications protocol.
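The polling alternative mentioned above can be sketched as a small loop. This is an illustrative sketch only: the transport is stubbed out as a callable, and the attempt limit and sleep interval are assumptions; a real client would issue an HTTP GET on each cycle.

```python
# Illustrative polling loop for asynchronous notification over stateless
# HTTP: the client repeatedly asks the server whether a media clip is ready.

def poll_for_clips(fetch, max_attempts=5):
    """Call `fetch()` until it yields a clip reference or attempts run out.

    `fetch` stands in for an HTTP request that returns a clip URL or None.
    """
    for _ in range(max_attempts):
        clip = fetch()
        if clip is not None:
            return clip
        # a real client would pause between attempts, e.g. time.sleep(30)
    return None
```

The alternative the text names, an open connection on a separate protocol, would replace this loop with a blocking read on a persistent socket.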
  • Embodiments of the invention would, for example, allow a manager to notify all members of his or her team in a personalized manner that there was a meeting on Monday, saving many phone messages.
  • the master clip could, in this example, be recorded and saved on a cell phone, with each person's name recorded on the cell phone as well.
  • Embodiments of the invention may contain software interfaces allowing the user to, in effect, produce the master clip by holding a given button when recording the master clip and asserting another button when recording each variable insert clip. Alternatively, the user could simply save bulk personalization messages and send them en masse when needed, as in the case of staff meetings.
  • Embodiments of the invention may alternatively operate without manager intervention whereby the group to be invited to the staff meeting is contained within a server and a calendar function on a management server sends personalized media clips to the attendees a predetermined amount of time before the meeting.
  • the media clip may be generated and automatically transmitted when the user visits a particular web page.
  • the invention contemplates the use of a variety of different techniques for dynamically generating media clips.
  • the system obtains user information from a cookie file to instantaneously render a personalized multimedia file.
  • user data is already known by the system or obtained and confirmed via a log-in process.
  • embodiments of the invention may utilize a database of user information to assemble the media clip.
  • a content provider that wishes to distribute a media clip (e.g., a personalized advertisement or some other personalized media clip) may submit a request to the system.
  • the system utilizes the request, which identifies or contains at least one master clip to be readied for playback and contains type information associated with each of the locations where insert clips are to be merged into the master clip.
  • the type information is then utilized to obtain user information from a system database and the user information is in turn used to obtain relevant insert clips for purposes of generating a media file.
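The type-driven resolution just described can be sketched as two lookups: the type name selects a field from the user record, and the field value selects a prerecorded insert clip. All database contents, clip filenames, and field names below are hypothetical.

```python
# Illustrative resolution of insert clips from type information: each insert
# point in a request names a type, which is looked up in the user record,
# whose value in turn selects a prerecorded insert clip.

USER_DB = {
    "u42": {"user_name": "alice", "product_name": "coffee"},  # hypothetical
}
CLIP_LIBRARY = {
    ("user_name", "alice"): "alice.wav",
    ("product_name", "coffee"): "coffee.wav",
}

def resolve_inserts(user_id, insert_types):
    """Map each insert point's type to a concrete insert clip for this user."""
    user = USER_DB[user_id]
    return [CLIP_LIBRARY[(t, user[t])] for t in insert_types]
```

The resulting clip list would then be merged into the master clip at the locations the request's type information identifies.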
  • a ring tone or a ringback tone may be generated via an embodiment of the invention and stored on a telephone company server for play when ringing a user, or for playing to a user that is ringing another user in the case of ringback tones.
  • the information in the system may be queried by ANI and DNIS, i.e., caller identification and dialed called party number respectively.
  • FIG. 1 illustrates an architectural view of an embodiment of the invention for generating and distributing personalized media.
  • Embodiments of the invention provide a user such as sender 100 with a way to generate and distribute media clips to one or more other recipients such as receiver 107.
  • sender 100 utilizes a client to connect with receiver 107.
  • a client is typically a computing device capable of communicating through one or more types of networks.
  • An example client as shown in this embodiment of the architecture is cell phone 101 or 106.
  • the client may alternatively comprise a computing device such as a computer equipped with at least one processor, memory and storage media.
  • the computing device is equipped and configured to communicate using at least one network communication means.
  • a client may be equipped with a modem to communicate through (wire based or wave based wireless) telephone services or alternatively may be configured to communicate through one or more networking protocols such as Transmission Control Protocol (TCP) in combination with the Internet Protocol (IP) over the Internet.
  • Computing devices including cellular telephones, Personal Digital Assistants (PDAs), desktop computers, laptop computers, or any other electronic apparatus capable of communicating through a wire-based and/or wireless network may be utilized as a client.
  • a client may be a personal digital assistant equipped with a browser capable of rendering Hypertext Markup Language (HTML), a JAVA virtual machine capable of running applets received from a remote server, and any other computer program code that supports communication between the user and a remote machine.
  • Other means that allow the user to upload personal media clips include an email client, a data streaming service supported by the client, a HyperText Transport Protocol (HTTP) posting, and any other mechanism that allows a user to post media clips to media generation server 104.
  • Client 106 (also referred to as a playback device) comprises a media player.
  • a client 106 may be a cell phone that allows one or more users to access a media database 103 to play media clips.
  • Other types of multimedia destination clients may consist of a desktop computer equipped with a multimedia player, a personal digital assistant and any other electronic device capable of playing a media clip or allowing access to a network location that delivers media clips.
  • a media server is designed to handle access to and the delivery of media clips and typically is capable of handling communication to senders and receivers for purposes of delivering media.
  • An example of a media server is cell gateway 102 or 105.
  • Other context information may be retrieved from a plurality of sources by the media server or media database (or other server coupled with the media database) such as calendar information, location information or any other information that may be utilized in inserting context information into a media clip.
  • Media server 105 is capable of connecting to third-party servers (e.g., other websites) and local or remote databases to collect context information and/or media clips.
  • Client 106 may also comprise a scheduler component in order to poll for media clips from media server 105.
  • Systems embodying the invention may optionally utilize media generation engine 104 to process media clips. For example, after media server 105 determines the context and the master and insert clips to use for generating the output media clips, media server 105 may communicate that information to media generation engine 104 so media generation engine 104 can retrieve the data for the media clips from one or more storage locations in media database 103. Media generation engine 104 then uses the input information to generate one or more media clips.
  • Media clip generation involves applying one or more processing algorithms to the input data. Typical processing involves merging/mixing, audio dubbing, inserting media clips, and any other type of processing that takes one or more media clips and generates one or more new media clips based on context information.
  • Media server 105 may employ a highly optimized multi-threaded compressed media seamless splicing process in order to maximize the number of connections, network throughput and number of media clips that can be processed per media server 105 per unit time.
  • embodiments of the invention may employ a cache in order to further minimize the processing involved for repetitive access applications whereby each successive access avoids accessing media database 103 and the associated delays with accessing a database versus reading memory directly.
  • the cache may comprise a single concatenated media clip built out of the various static and insert clips, or may comprise the individual clips that are sent or constructed at real-time to produce the seamless output clip.
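The cache described above can be sketched as a keyed store in front of the expensive assembly step. This is an illustrative sketch only: the key structure, the stub assembly function, and the miss counter are assumptions made to show the behavior, not the claimed design.

```python
# Illustrative media-clip cache: the first request for a given
# (master, inserts) combination assembles the clip (a database hit); repeat
# requests return the concatenated clip from memory, avoiding the database.

class ClipCache:
    def __init__(self, assemble):
        self._assemble = assemble   # expensive path: reads media database 103
        self._store = {}            # (master_id, insert_ids) -> finished clip
        self.misses = 0

    def get(self, master_id, insert_ids):
        key = (master_id, tuple(insert_ids))
        if key not in self._store:
            self.misses += 1
            self._store[key] = self._assemble(master_id, insert_ids)
        return self._store[key]
```

As the text notes, the cached value could equally be the individual static and insert clips rather than one concatenated clip; this sketch shows the concatenated variant.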
  • media database 103 is typically a commercially available or freeware relational database management system (RDBMS).
  • Storage locations may also be any file system accessible locally or through a network or a memory cache.
  • Systems embodying the invention may comprise media production software components running on client 101 or any other client accessible by sender 100, or on media generation engine 104.
  • a media production system allows a user to utilize newly recorded media clips, or existing media clips to edit the media clips and prepare the media clips for usage with embodiments of the invention.
  • the production phase is disclosed below in further detail, and involves producing media clip properties, attributes and symbols to allow, at a later stage, the multimedia generation engine to combine a plurality of media clips to generate one or more output media clips.
  • a separate production system may be utilized independent of media generation engine 104. Regardless of the location that the production occurs, generation allows a producer to create clips using real life recording or computer generated media that include audio, video or any other electronic data format.
  • FIG. 2 illustrates a method for generating personalized media in accordance with one or more embodiments of the invention.
  • the process starts at 200.
  • the system obtains at least one master clip at 201 and obtains any associated insert clip(s) at 202.
  • the system determines the points within the master clip(s) where the insert clip(s) are to be inserted.
  • the determination of the insert point(s) is accomplished by obtaining metadata detailing the location of the insert points or through any other method of describing the locations where the insert clips are to be inserted.
  • the master and insert clips are then seamlessly blended into a media clip in order to mask any transition points between the master and insert clips. This will be described more fully below.
  • the individual clips may be stored away for later access in a cache or database. If the media clip is to be transmitted immediately as determined at 205, then the media clip is distributed at 207 (see Figure 3). If the media clip is being created for an asynchronous pickup, for example as used in a ring tone or ringback tone for a cell phone, then the media clip is saved at 206. The system then waits to start the process over at 200 when another media clip generation is requested.
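The Figure 2 flow can be sketched end to end: obtain the master and insert clips, read the insert points from metadata, splice, then either distribute immediately (step 207) or save for asynchronous pickup (step 206). The sketch below is illustrative only; clips are plain strings and insert points are character offsets, which stand in for media data and metadata-described time offsets.

```python
# Illustrative end-to-end generation (Figure 2): splice insert clips into a
# master clip at metadata-given offsets, then distribute now or save for
# later asynchronous pickup (e.g. a ring tone or ringback tone).

def generate_media(master, inserts, insert_points, immediate, saved_store):
    """Splice `inserts` into `master` at offsets `insert_points`."""
    out, prev = [], 0
    for point, clip in sorted(zip(insert_points, inserts)):
        out.append(master[prev:point])   # static segment before the gap
        out.append(clip)                 # insert clip fills the gap
        prev = point
    out.append(master[prev:])            # trailing static segment
    media = "".join(out)
    if immediate:
        return media                     # distributed now (step 207)
    saved_store.append(media)            # saved for asynchronous pickup (206)
    return None
```

Here the blend at each joint is a plain concatenation; an actual embodiment would apply the seamless splicing described earlier at each transition point.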
  • although the invention contemplates the use of many different interfaces (e.g., a web interface, email client, and/or any other type of device configured to execute playback of the media clip), there are some specific details and generalities associated with the use of each type of interface.
  • the web interface and/or email interface provides users with a way to access, through an interconnection fabric such as a computer network, one or more server sites.
  • the client and server system supports any type of network communication, including, but not limited to wireless networks, networking through telecommunications systems such as the phone system, optical networks and any other data transport mechanism that enables a client system to communicate with a server system.
  • the user interface also supports data streaming, as in the case of streaming multimedia data to a browser plug-in, a multimedia player, and/or any type of hardware device capable of playing multimedia data.
  • other embodiments of the invention may utilize web service interfaces, or may take advantage of peer-to-peer architectures for obtaining and splicing clips to one another and delivering them to one or a great number of users.
  • the user interface provides a mechanism for obtaining a unique identifier associated with each user that accesses the system. Any data item that uniquely identifies a user or device is referred to as a unique identifier.
  • the ANI and DNIS provide the caller identification and the dialed (called) party number, respectively.
  • the ANI and DNIS may be utilized to query media database 103 in Figure 1 for a media clip unique to the desired sender and receiver. This for example may be utilized in order to play a certain piece of music, or personalized message to a particular caller, but not to a different caller.
  • the overlay of an audible standard telephone ring may be performed in order for the sender to comprehend that the receiver's phone is actually ringing and that they are not on hold.
  • a serial number and/or a user name and password can act as a unique identifier and thereby provide access to the system while restricting unauthorized access.
  • the unique identifier is a cookie file containing user information (e.g., user name, age, and any other information about the user) or a URL or pointer to the appropriate user information. Once the system obtains the cookie information, that information is used for purposes of rendering a personalized multimedia file.
  • the system can utilize the information contained within the cookie file to determine which insert clip to associate with a master clip for purposes of rendering the media clip.
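As a concrete sketch of unique-identifier lookup, the following treats the media database as a simple key/value map from an identifier (an ANI/DNIS pair, a cookie user name, or a serial number) to a clip reference. The keys, file names, and fallback scheme are invented for illustration:

```python
def select_insert_clip(media_db: dict, unique_id: str,
                       default: str = "generic") -> str:
    """Return the insert clip keyed by a unique identifier, falling back
    to a generic clip when no personalized entry exists."""
    return media_db.get(unique_id, media_db[default])

media_db = {
    "generic": "insert_generic.wav",
    "5551234:5559876": "insert_for_alice.wav",  # an ANI:DNIS pair as the key
    "cookie:bob": "insert_for_bob.wav",         # a cookie-derived user name
}
clip = select_insert_clip(media_db, "5551234:5559876")
```

This mirrors the text's point that a given sender/receiver pair can resolve to a media clip unique to that pair, while other callers receive a default.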
  • embodiments of the invention are configured to selectively determine the content of the multimedia data based on user information such as a user type, and user preferences.
  • the system may obtain master clips, insert clips, and/or other multimedia clips from a variety of locations.
  • locations include database storage systems, data files, network locations, hard drives, optical storage devices and any medium capable of storing data including but not limited to network resources comprising web services and peer-to-peer networks.
  • the storage location is a relational database system.
  • a database system may hold the master clips and/or insert clips used to generate the media clips and/or a variety of other data or metadata associated with each media clip.
  • the data associated with the media clip allows for categorizing, classifying and searching media clips based on attributes.
  • metadata further comprises information about the clip including insert points, variable names at insert points, durations, and other items.
  • Database systems may be configured to index data in the database for purposes of expediting the process of searching for specific information in the database.
  • the database may comprise multiple mirrors to enable the system to scale up to handle a large number of concurrent users.
  • Figure 3 illustrates a method for distributing personalized media in accordance with one or more embodiments of the invention starting at 300.
  • a media clip is obtained at 301 from media database 103 as shown in Figure 1. If a media clip requires dynamic context data to be inserted before transmitting, as determined at 302, then the context clip is obtained at 303. The insertion points for the context clip or clips are determined at 304.
  • the context clip is seamlessly blended with the input media clip yielding a media clip comprising the context information at 305. If the media clip comprises no context information as determined at 302, or if context information is inserted then the media clip is formatted for the receiving playback device at 306. This may comprise formatting the media clip for a given audio format for example.
  • the media clip is then transmitted to the client at 307.
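The Figure 3 distribution flow can be sketched in a few lines. The function name, the list representation of clips, and the dictionary payload are assumptions made for illustration only:

```python
def distribute(media_clip, context_clip=None, insert_point=None, fmt="wav"):
    """Sketch of the distribution flow: optionally blend a context clip,
    then format the result for the receiving playback device."""
    clip = list(media_clip)
    if context_clip is not None:
        # Splice the context clip in at its insertion point (steps 303-305).
        clip[insert_point:insert_point] = context_clip
    # Formatting for the device (step 306) is reduced to tagging the format.
    return {"format": fmt, "payload": clip}  # step 307: transmit this

# A clip with context data (e.g., the current time) spliced at position 2.
msg = distribute([10, 20, 50], context_clip=[30, 40], insert_point=2, fmt="mp3")
```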
  • Context information may be obtained from any number of sources.
  • multimedia attributes may be obtained from a database system, time from a clock system, events information from a calendaring system, geographical information from a global positioning system and any other system capable of providing context information to embodiments of the invention.
  • Context information may combine attribute information and rule information to determine a means and time for initiating playback.
  • an event originating from a calendaring system may specify which delivery means to use for delivering the output media clip depending on time of the day, type of the event, events preceding (or succeeding) the event, or location of the user. If the user is online, playback may be via the web interface, or if the user is using email playback may be in the form of an email.
  • the playback may be redirected to a cellular phone.
  • the system may use other context attributes to determine exclusion rules between media clips. For example, insert media clips designed for use in certain contexts such as happy occasions, may only be used in some context categories and not others.
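The delivery-means rules described above (web playback when online, email when in an email session, otherwise a cellular phone) can be expressed as a small rule function. The attribute names are illustrative, not taken from the patent:

```python
def choose_delivery(context: dict) -> str:
    """Pick a delivery channel from context attributes, in priority order."""
    if context.get("online"):
        return "web"            # user is online: play via the web interface
    if context.get("in_email_session"):
        return "email"          # user is using email: deliver as an email
    return "cellular"           # otherwise redirect playback to a cell phone
```

A fuller implementation would also consult calendaring events, time of day, and location, and apply the exclusion rules between media clips mentioned in the text.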
  • embodiments of the invention allow for providing an engine that may automatically handle tasks on behalf of persons.
  • Figure 4 illustrates a block diagram representing the elements of one or more media clips configured in accordance with one or more embodiments of the invention.
  • the horizontal axis is in increasing time order from left to right.
  • Master clip 410 contains any type of multimedia data including, but not limited to, audio and/or video.
  • One or more master clips can be merged together to create a media clip ready for playback.
  • Insert clip 420 can also contain any type of data (e.g., audio, video, etc).
  • the system may combine two or more media clips to form either a master clip or insert clip so long as the clips have at least one property in common. For example, an audio clip may be merged with a video clip if the audio track included with the video clip has the same characteristics as the audio clip to be inserted.
  • the clips may be normalized before combining. Clips with different lengths may be front- or back-end truncated, or resampled faster or slower, in order to fit the clip within the desired slot.
  • the master clip may contain metadata stating that the time slot to fit an insert clip into is not fixed, meaning that the clips can simply be concatenated one after the other since there may not be background sound information which would cause a non-seamless splice to occur. This can also be thought of as appending master clips back to back, for example if no fixed time gap was left in a given master clip and another clip such as an insert clip is to be appended before yet another master clip.
  • the independent clips are seamlessly spliced in order to produce an output clip that is perceived as a single recorded clip.
  • the location where the system interleaves insert clip 420 with one or more master clips 410 is marked by a start and end point, or start point and duration.
  • the insert clip is recorded to use the entire duration between the start and end point, thereby allowing the insert clip to sound or appear seamlessly integrated with the master clip.
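The normalization options described above (truncating the front or back of a clip, or playing it faster or slower to fill the slot) can be sketched crudely on a sample list. Nearest-neighbour index scaling stands in for real resampling, which would use proper interpolation and filtering:

```python
def fit_to_slot(samples, slot_len, mode="resample"):
    """Force an insert clip to exactly `slot_len` samples."""
    if mode == "truncate_back" and len(samples) >= slot_len:
        return list(samples[:slot_len])      # drop the tail
    if mode == "truncate_front" and len(samples) >= slot_len:
        return list(samples[-slot_len:])     # drop the head
    # Nearest-neighbour "resampling": play the clip faster (step > 1)
    # or slower (step < 1) so it exactly fills the slot.
    step = len(samples) / slot_len
    return [samples[int(i * step)] for i in range(slot_len)]
```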
  • Obtaining a master clip or insert clip may involve recording a live performance (e.g., a commercial or an artistic performance by a band), or capturing computer synthesized sounds.
  • a producer identifies the clips that are to become master clips and edits the clips or the voice track of a clip or clips in order to leave gaps for dropping one or more insert clips.
  • the producer may also input attributes to describe the sounds or the images in the media clips.
  • Some examples of data that may serve as attributes are text keywords and key phrases, a sound clip preview, an image preview or any other data format that may characterize a media clip.
  • the producer also determines among all available media clips those that are designed to be insert clips.
  • Insert clips are fashioned in embodiments of the invention to be inserted or mixed at one or more locations in one or more media clips (e.g., master clips). In some instances insert clips are artfully recorded to fill a predetermined duration of time. If a master clip leaves a gap of 3 seconds to place a person's name, the insert clip is recorded to fill up the entire 3 seconds. Thus, the underlying music track seamlessly integrates the master clip together with the insert clip.
  • An insert clip may itself be a master clip, if the insert clip is designed for mixing with other media clips.
  • the system also provides a mechanism for associating insert clips with keywords, key phrases, sound previews, image previews and any other data format that allows the system to identify, classify, sort, or otherwise manipulate the insert clip for purposes of data management; this information is commonly known as metadata.
  • the producer marks the clip with insertion points.
  • the invention contemplates the use of various techniques for marking insertion points.
  • the system may, for instance, embed a signal having an identifiable pattern to mark a particular location in a master clip or other type of media clip. The signal is checked for when the system is looking for a location to place an insert clip.
  • Other approaches involve defining location information and storing the location information along with the media clips (e.g., in a database system) in the form of metadata associated with the clip.
  • the system may utilize a plurality of master clips that each begin and/or end at the point where an insert clip is to be placed.
  • when the master clips are merged together with one or more appropriate insert clips, the result is a seamless media clip ready for playback.
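The segmented-master approach just described, in which each master clip begins and/or ends exactly where an insert clip belongs, reduces merging to interleaved concatenation. A sketch, with sample lists standing in for audio:

```python
def assemble(master_segments, insert_clips):
    """Interleave pre-split master segments with insert clips: the master
    is divided so that every segment boundary is an insertion point."""
    out = []
    for i, segment in enumerate(master_segments):
        out.extend(segment)
        if i < len(insert_clips):      # an insert follows each boundary
            out.extend(insert_clips[i])
    return out

# Two master segments around one insert clip (e.g., a recorded name).
song = assemble([[1, 2], [6]], [[3, 4, 5]])
```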
  • a song or some other type of recorded information is split into a set of compressed or uncompressed sequential files (e.g., WAV, AVI, MP3, OGG, etc).
  • certain files are identified as insert files, the voice track is removed from the insert files, and an insert clip is recorded over the insert file. This allows for the appearance of an original recording since the background music continues to play along while a vocally personalized or context associated phrase is inserted into the media clip.
  • the producer can create the insert clip by simply adding the appropriate voice data to the clip. In either case the master clips and insert clips are then merged together to create a finalized media clip.
  • the system may generate the media clip on the fly by integrating the appropriate master clips and insert clips together, or it may retrieve a previously created media clip from the database.
  • the producer of a media clip may define mixing and insertion properties.
  • the system may use such properties to define the way an insert clip is merged together with one or more master clips. For instance, properties may enable the system to know when to fade the master clip signal to allow for seamless integration of an insert clip and slowly return to normal after the insert clip completes.
  • the markings indicating the split and merge locations may be embedded codes or metadata stored separate from the clip.
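The fade behavior described in the mixing properties (fade the master signal down for the insert and slowly return to normal afterward) might look like the following envelope sketch. The linear ramp, parameter names, and additive mix are all assumptions; a production mixer would use smoother curves and proper gain staging:

```python
def duck_gain(n, i, fade, floor):
    """Master gain at sample i of an n-sample insert region: ramp down
    to `floor`, hold while the insert plays, then ramp back to unity."""
    if i < fade:
        return 1.0 - (1.0 - floor) * (i + 1) / fade
    if i >= n - fade:
        return 1.0 - (1.0 - floor) * (n - i) / fade
    return floor

def mix_with_fade(master, insert, start, fade=2, floor=0.0):
    """Mix `insert` over `master` at `start`, ducking the master under it."""
    out = list(master)
    n = len(insert)
    for i in range(n):
        out[start + i] = out[start + i] * duck_gain(n, i, fade, floor) + insert[i]
    return out
```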
  • the applications generated with embodiments of the invention reflect the flow of natural language. This is accomplished when a creator of the application writes at least one "generic" filler for every slot in the application and/or provides an alphabetic set of "generic" fillers for slots with highly variable information (e.g., name), and accounts for the phonemic blending that occurs across closely enunciated phrases. If the writer has determined that certain phonemic blends will produce a "dirty" post-produced result, and has decided to include the content preceding and following a slot, the studio guide should incorporate the preceding and following content in every read of the variable. A "dirty" blend occurs whenever a phoneme (a vowel or consonant utterance) crosses into the following word in natural language.
  • FIG. 5 illustrates a flow chart showing the interactions of the system components for a telephonic ring tone and/or ringback tone embodiments of the invention.
  • a user selects personalized content at 500; this is shown in the block diagram as a client computer on the left, associated with a user visiting an operator portal such as a telephone company portal and selecting personalized content.
  • the operator portal communicates with the media server at 501 to determine what personalized applications are available and to obtain the associated variable names and to supply or receive any other information required, such as for example the bit rates desired for delivery to the specific target telephonic client device such as a particular cell phone.
  • the user configures the ring tone or ringback tone at 502; this can involve selecting a particular celebrity voice to use for the personalized clip and filling out forms to account for the required insert clips, for example recipient name, caller name or any other variables required for generation of the resulting clip.
  • the user interacts with the operator portal in this step.
  • the user previews the clip at 503 when the operator portal redirects the user to a preview window to allow the user to access the clip from the media server.
  • the user confirms the transaction at 504 via the operator portal.
  • the operator portal retrieves the finished clip or clips and uses the clip for a ring tone, ringback tone or answering message as appropriate.
  • In an example XML embodiment of an application, a single document may be utilized to describe the structure of the audio file, clip or clips to be delivered.
  • the application XML is described in the table below:
  • the transcript for the sample application is:
  • the integrator would issue the following HTTP request:
  • a media server or media generation engine may comprise many different types of hardware.
  • An example hardware configuration may comprise Dell PowerEdge 2400 servers, each with dual Pentium III Xeon processors with 512K L2 cache. Each server may be configured with 1GB of main memory and 42GB (6x7GB) of storage configured with software RAID 0+1. Alternatively the system may be deployed on higher density blade servers. Logically, systems that employ MPEG 3 encoding gain substantial performance benefits from faster processor speeds. Embodiments using seamless splicing of compressed formats may serve higher numbers of users since the processing requirements of such implementations are significantly lower. This is due to the fact that encoding the output data does not have to occur on the entire output media clip, as it does when raw insert clips are added to a raw master clip.
  • Some compressed formats allow frames to be inserted in the middle of other frames without altering portions of the preceding or succeeding frames.
  • These compression formats can be used to pre-encode master clips and insert clips before splicing them together. This optimization can yield a two-order-of-magnitude increase in the number of users serviced per second versus a non-cached methodology that splices raw master and insert clips and then compresses and transmits the result.
  • Embodiments of the invention utilizing scalable architectures may transcode and/or compress the media clips resulting in a great reduction in network load offered by such compression.
  • the system may perform WAV to MPEG 3 transcoding using LAME or any other encoder capable of compressing data into formats required for output by embodiments of the invention. While this scheme dramatically increases audio quality and/or reduces network demand by a dramatic ratio (10:1), transcoding and compression place heavy load on the media generation engine or media server in the case of context clip blending.
  • Another embodiment of the invention utilizes an encoder with settings designed to allow for seamless splicing of compressed media. This eliminates the need for a layer of compression engines dedicated to compressing the media clips and creates a system that is approximately 300 times faster than a brute force WAVE to MPEG-3 media clip cache-less personalization system.
  • Seamless splicing of media clips may be performed for certain media types.
  • Raw data types such as WAV, AIFF and AU format files are ordered in time without borrowing bits from preceding or succeeding frames, and individual sections may therefore be sliced out and added in cleanly.
  • Highly compressed formats may or may not allow this type of manipulation of individual frames of data, since highly compressed formats generally place data that should belong in a hard-to-compress frame into easy-to-compress frames representing simple waveforms. This interlacing of data makes the frames dependent upon one another.
  • MPEG-3 allows for compression with slight degradation of high end frequency spectrum by encoding frames to hold information only for the current frame.
  • the LAME encoder software produces various blank spots on the front and end of encoded clips due to algorithms used in order to decode the clips.
  • Certain encoders use MDCT/filterbank routines functionally similar to decoder routines and leave 528 sample delays at the front of encoded files.
  • seamlessly spliceable media clips may be created by clipping the first granule (576 bits, containing MDCT coefficients) of an insert clip encoded using LAME software, eliminating the ID3 metadata from the file, and removing the last 288 bits at the end of the insert clip.
  • the resulting media clip contains no front or back-end artifacts, metadata or data dependencies to hinder the independent insertion into a master clip.
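The arithmetic above can be made concrete with a byte-slicing sketch. Real MP3 splicing requires frame-aligned editing, so this is purely illustrative: it strips a standard ID3v2 tag (10-byte header with a synchsafe size field) and then removes the quoted 576 bits (72 bytes) from the front and 288 bits (36 bytes) from the back:

```python
def strip_id3v2(data: bytes) -> bytes:
    """Remove a leading ID3v2 tag, if present. The tag size is a 4-byte
    synchsafe integer (7 bits per byte) following the 10-byte header."""
    if data[:3] == b"ID3":
        size = (data[6] << 21) | (data[7] << 14) | (data[8] << 7) | data[9]
        return data[10 + size:]
    return data

def trim_for_splice(data: bytes, head_bits=576, tail_bits=288) -> bytes:
    """Illustrate the trim described in the text: drop ID3 metadata, the
    first granule's bits, and the trailing bits of an insert clip."""
    data = strip_id3v2(data)
    return data[head_bits // 8 : len(data) - tail_bits // 8]

# Illustrative only: a fake ID3v2 tag (synchsafe size = 5, so 5 tag bytes
# follow the header) in front of 120 bytes standing in for encoded audio.
fake = b"ID3" + bytes([4, 0, 0, 0, 0, 0, 5]) + b"AAAAA" + bytes(120)
trimmed = trim_for_splice(fake)
```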
  • a user utilizes the system embodying the invention to send customized messages (e.g., an invitation, advertisement, reminder, etc..) to one or more other users (e.g., recipients).
  • a user may connect to a server and input a list of other users who are to receive the customized message.
  • the sending user may select a master clip for distribution and the system assembles a multimedia clip for distribution using the list of user information to identify an appropriate insert clip(s) to merge with the master clip.
  • the system is also capable of retrieving context information to determine the best communication path to reach the recipient and/or the recipient's availability.
  • the system may obtain other context information such as availability information, personal information (e.g. address and phone number), and any other context information useful for purposes of assembling and disseminating the multimedia clip.
  • the system utilizes the context information in several ways. For example, the system may send messages at different times depending on the distance between the residence location of each recipient and the location of the meeting. The system may also send the message using different transport mechanisms depending upon the whereabouts of the recipient.
  • the system may elect to email the message if the recipient is reachable by email; otherwise the system may opt to transmit an audio message to a voicemail system or to contact the user by making a cellular phone call.
  • the system retrieves recipient information (e.g. first and last names, his/her title etc.), and assembles a multimedia clip appropriate for each user.
  • the system may for instance, alter the language, gender, tone, or any other modifiable aspects of the voice track depending upon the characteristics of the user.
  • the system may also select an appropriate mechanism and format for the multimedia clip and thereby produces a multimedia clip specific to each user.
  • the personalization variables or insert variables are mapped to existing insert clips.
  • consider an application that comprises a name variable and enables a user to type in the name of the person for whom the message is to be personalized.
  • non-existent names are mapped to the closest possible name.
  • a personalized message for a user name "Joseph" may map to "Joe".
  • decency standards may also be applied: when a user types in (or says) a word that is inappropriate for the insert clip, for example a curse word, it may be mapped to an acceptable word that is not offensive.
  • Using 1300 names covers 80% of the population of the United States, so mapping all remaining names to a generic name such as "sir" or "madame" achieves that 80% coverage at a small processing price.
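The name-mapping scheme described above (known names pass through, near-misses map to the closest recorded name, offensive words and unknown names fall back to a generic) can be sketched as a lookup chain. All of the table contents here are invented; the real system would back this with its 1300-name clip library:

```python
# Illustrative data only, not the patent's name tables.
KNOWN_CLIPS = {"joe", "mary", "anna"}          # names with recorded insert clips
ALIASES = {"joseph": "joe", "anne": "anna"}    # closest-possible-name mapping
BLOCKED = {"curseword"}                        # words failing decency standards

def map_name(requested: str, fallback: str = "sir") -> str:
    """Map a typed name onto an available insert clip name, else a generic."""
    name = requested.strip().lower()
    if name in BLOCKED:
        return fallback          # decency mapping to an inoffensive result
    if name in KNOWN_CLIPS:
        return name              # exact match: a clip exists for this name
    return ALIASES.get(name, fallback)   # closest name, else "sir"/"madame"
```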
  • embodiments of the invention may be utilized by cell phone companies to obtain ring tones and ring back tones along with other personalized media, such as button vocalizations in a celebrity voice for example.
  • although generation of personalized media occurs when a user purchases a personalized ringback tone, for example, the generation of the media clip for a given incoming phone number or caller ID may occur at any time before the clip is actually used.
  • the revenue generated from the sale of the personalized media clip comes from the telephone company, which may charge its clients a different fee, or a fee for each use of the generated media clip.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Television Signal Processing For Recording (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Personalized media is generated by obtaining a master clip (201) having predefined insertion points (203), obtaining an insert clip (202), and seamlessly merging the insert clip into the selected master clip to generate a personalized media clip (204) with undetectable transitions at the clip splices. An insert clip (202) is used to add variables such as name, place, time, gender, product name, and other desired information to a master clip. Dynamic context data may be added and transmitted to a client playback device. Distribution may occur in real time or asynchronously, with the clip held until requested. The playback device may use the clip as a ring tone, ringback tone, voice message, or for any other purpose. Although the content of the master clip and/or insert clip may use any voice, including the voice of its creator, a celebrity voice, or the voice of a celebrity impersonator, the audio used in the master clip and the insert clip is seamlessly merged using compressed or uncompressed data in given formats.
EP05711731A 2005-01-20 2005-01-20 System and method for generating and distributing personalized media Withdrawn EP1847102A4 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2005/001853 WO2006078246A1 (fr) 2005-01-20 2005-01-20 System and method for generating and distributing personalized media

Publications (2)

Publication Number Publication Date
EP1847102A1 true EP1847102A1 (fr) 2007-10-24
EP1847102A4 EP1847102A4 (fr) 2009-04-08

Family

ID=36692538

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05711731A Withdrawn EP1847102A4 (fr) 2005-01-20 2005-01-20 Systeme et procede destines a generer et a repartir des supports personnalises

Country Status (3)

Country Link
EP (1) EP1847102A4 (fr)
JP (1) JP2008529345A (fr)
WO (1) WO2006078246A1 (fr)

US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
DK179588B1 (en) 2016-06-09 2019-02-22 Apple Inc. INTELLIGENT AUTOMATED ASSISTANT IN A HOME ENVIRONMENT
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
DK179049B1 (en) 2016-06-11 2017-09-18 Apple Inc Data driven natural language event detection and classification
DK179343B1 (en) 2016-06-11 2018-05-14 Apple Inc Intelligent task discovery
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
DK201770439A1 (en) 2017-05-11 2018-12-13 Apple Inc. Offline personal assistant
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
DK179549B1 (en) 2017-05-16 2019-02-12 Apple Inc. FAR-FIELD EXTENSION FOR DIGITAL ASSISTANT SERVICES
US10419790B2 (en) 2018-01-19 2019-09-17 Infinite Designs, LLC System and method for video curation
CN110381356B (zh) * 2019-07-19 2022-06-07 Beijing Bytedance Network Technology Co., Ltd. Audio and video generation method and apparatus, electronic device, and readable medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020116716A1 (en) * 2001-02-22 2002-08-22 Adi Sideman Online video editor
GB2395866A (en) * 2002-11-28 2004-06-02 Edward O'connor Mobile telephony enabled handset audio message application

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5821874A (en) * 1996-03-19 1998-10-13 Sony Corporation Messaging terminal with voice notification
US20020001371A1 (en) * 1998-05-29 2002-01-03 Randy G. Goldberg Method and apparatus for delivering personalized broadcast messages
JP2001119473A (ja) * 1999-10-20 2001-04-27 Shimadzu Business Systems Corp Message transmission system
US20020138302A1 (en) * 2001-03-21 2002-09-26 David Bodnick Prepaid telecommunication card for health care compliance
JP2003110655A (ja) * 2001-09-28 2003-04-11 Brother Ind Ltd Telephone device with speech synthesis function
FR2845235A1 (fr) * 2002-10-01 2004-04-02 Cegetel Instant voice messaging method and device for implementing such a method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO2006078246A1 *

Also Published As

Publication number Publication date
JP2008529345A (ja) 2008-07-31
WO2006078246A1 (fr) 2006-07-27
EP1847102A4 (fr) 2009-04-08

Similar Documents

Publication Publication Date Title
US7142645B2 (en) System and method for generating and distributing personalized media
EP1847102A1 (fr) System and method for generating and distributing personalized media
US7123696B2 (en) Method and apparatus for generating and distributing personalized media clips
US20060136556A1 (en) Systems and methods for personalizing audio data
US7761293B2 (en) Spoken mobile engine
US6965770B2 (en) Dynamic content delivery responsive to user requests
US8086464B2 (en) System for low-latency animation of talking heads
US20030028380A1 (en) Speech system
US7277855B1 (en) Personalized text-to-speech services
US9318100B2 (en) Supplementing audio recorded in a media file
US7408106B2 (en) Tele-karaoke
US11463772B1 (en) Selecting advertisements for media programs by matching brands to creators
JP2003521750A (ja) Speech system
WO2006076516A2 (fr) Customizable delivery of audio information
WO2008001500A1 (fr) Audio content generation system, information exchange system, program, audio content generation method, and information exchange method
US20070208564A1 (en) Telephone based search system
CN101557483A (zh) Method and system for generating media programs
WO2007002448A1 (fr) Apparatus, system, method, and article for automatic context-based transformation and generation of multimedia data
WO2002009088A2 (fr) Personalized disc jockey system
US8219402B2 (en) Asynchronous receipt of information from a user
WO2007147334A1 (fr) Method for converting text information into a media or multimedia stream intended for a terminal
KR20060008237A (ko) Personalized on-demand music service system over wired and wireless networks, and method therefor
WO2003063133A1 (fr) Personalizing the audio presentation of synthesized messages in a terminal
AU2989301A (en) Speech system
WO2010132806A2 (fr) System and method for on-demand delivery of audio content for use with entertainment works

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070820

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20090309

RIC1 Information provided on ipc code assigned before grant

Ipc: H04M 19/04 20060101ALI20090303BHEP

Ipc: H04M 1/64 20060101AFI20060801BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20090606