WO2007132286A1 - Adaptive user interface - Google Patents

Adaptive user interface

Info

Publication number
WO2007132286A1
Authority
WO
WIPO (PCT)
Prior art keywords
music
user interface
data structure
audible
graphical user
Prior art date
Application number
PCT/IB2006/001932
Other languages
English (en)
Other versions
WO2007132286A8 (fr)
Inventor
Timo Kosonen
Kai Havukainen
Jukka Holm
Antti Eronen
Original Assignee
Nokia Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Corporation
Priority to PCT/IB2006/001932 (WO2007132286A1)
Priority to CA2650612A (CA2650612C)
Priority to US12/227,313 (US20090307594A1)
Priority to TW096110343A (TWI433027B)
Publication of WO2007132286A1 publication Critical patent/WO2007132286A1/fr
Publication of WO2007132286A8 publication Critical patent/WO2007132286A8/fr


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00: Details of electrophonic musical instruments
    • G10H1/0008: Associated control or indicating means
    • G10H2210/00: Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031: Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/076: Musical analysis for extraction of timing, tempo; beat detection
    • G10H2220/00: Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/005: Non-interactive screen display of musical or status data
    • G10H2230/00: General physical, ergonomic or hardware implementation of electrophonic musical tools or instruments, e.g. shape or architecture
    • G10H2230/005: Device type or category
    • G10H2230/015: PDA [personal digital assistant] or palmtop computing devices used for musical purposes, e.g. portable music players, tablet computers, e-readers or smart phones in which mobile telephony functions need not be used
    • G10H2230/021: Mobile ringtone, i.e. generation, transmission, conversion or downloading of ringing tones or other sounds for mobile telephony; special musical data formats or protocols therefor

Definitions

  • Embodiments of the present invention relate to an adaptive user interface.
  • Some embodiments relate to methods, systems, devices and computer programs for changing an appearance of a graphical user interface in response to music.
  • Such devices typically have a user interface that enables a user of the device to control the device.
  • Some devices have a graphical user interface (GUI).
  • Digital music is a growth business, but it is extremely competitive. It would therefore be desirable to increase the value associated with digital music and/or digital music players so that they are more desirable and consequently more valuable.
  • A method comprising: obtaining music information that defines at least one characteristic of audible music; and controlling changes to an appearance of a graphical user interface using the music information.
  • A system comprising: a display for providing a graphical user interface; and a processor operable to obtain music information that defines at least one characteristic of audible music and to control changes to an appearance of the graphical user interface using the music information while the music is audible.
  • A computer program for obtaining music information that defines at least one characteristic of audible music, and for controlling changes to an appearance of a graphical user interface using the music information.
  • A method comprising: storing a data structure that defines at least how a graphical user interface changes; and changing, with successive beats of audible music, the appearance of the graphical user interface using the data structure.
  • Fig. 1 schematically illustrates a system for controlling a graphical user interface (GUI);
  • Figs 2A, 2B and 2C illustrate a GUI that changes appearance in response to the tempo of the beats in audible music;
  • Figs 3A and 3B illustrate how a size of a graphical menu item may vary when the audible music has, respectively, a slow tempo and a faster tempo; and Fig. 4 illustrates a method of generating a GUI that changes in response to audible music.
  • Fig. 1 schematically illustrates a system 10 for controlling a graphical user interface (GUI).
  • The system comprises: a processor 2, a display 4, a user input device 6 and a memory 12 storing computer program instructions 14, and a GUI database 16.
  • The processor 2 is arranged to write to and read from the memory 12 and to control the output of the display 4. It receives user input commands from the user input device 6.
  • The computer program instructions 14 define a graphical user interface software application.
  • The computer program instructions 14, when loaded into the processor 2, provide the logic and routines that enable the system 10 to perform the method illustrated in Figs 2, 3 and/or 4.
  • The computer program instructions 14 may arrive at the electronic device via an electromagnetic carrier signal or be copied from a physical entity 1 such as a computer program product, a memory device or a record medium such as a CD-ROM or DVD.
  • The system 10 will typically be part of an electronic device such as a personal digital assistant, a personal computer, a mobile cellular telephone, a personal music player etc.
  • The system 10 may also be used as a music player.
  • A music track may be stored in the memory 12.
  • Computer program instructions, when loaded into the processor 2, enable the functionality of a music player as is well known in the art.
  • The music player processes the music track and produces an audio control signal which is provided to an audio output device 8 to play the music.
  • The audio output device 8 may be, for example, a loudspeaker or a jack for headphones.
  • The music player is responsible for the audio playback, i.e., it reads the music track and renders it to audio.
  • Figs 2A, 2B and 2C illustrate a GUI 20 that changes appearance in response to and in time with the tempo of the beats in audible music.
  • The GUI 20 comprises graphical items such as a background 22, a battery life indicator 24 and a number of graphical menu items 26A, 26B, 26C and 26D.
  • Figs 2A to 2C illustrate images of a GUI 20 captured sequentially while the appearance of the GUI changes in response to, and in synchronisation with, the tempo of the audible music.
  • The graphical menu item 26A is animated: it pulsates in size with the beat of the music.
  • The graphical menu item 26A has the same size S1 in Figs 2A and 2C but has an increased size S2 in Fig 2B.
  • Figs 2A and 2C illustrate the graphical menu item 26A at its minimum size S1 and Fig 2B illustrates it at its maximum size S2.
  • Fig. 3A illustrates how the size of the graphical menu item 26A varies when the audible music has a slow tempo.
  • Fig. 3B illustrates how the size of the graphical menu item 26A varies when the audible music has a faster tempo.
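  • A rough sketch of such a beat-synchronised pulsation is given below. This is an illustration, not code from the patent: the pixel sizes S1 and S2, the sinusoidal shape of the oscillation and the function name are all assumptions.

```python
import math

S1, S2 = 48, 64  # assumed minimum and maximum item sizes, in pixels


def item_size(elapsed_s: float, tempo_bpm: float) -> float:
    """Size of a menu item that pulsates once per beat of the music.

    The size oscillates between S1 and S2; a faster tempo (Fig. 3B)
    shortens the period of the oscillation relative to a slow tempo
    (Fig. 3A).
    """
    beat_period_s = 60.0 / tempo_bpm                      # seconds per beat
    phase = (elapsed_s % beat_period_s) / beat_period_s   # position in beat, 0..1
    # Minimum size S1 on the beat boundary, maximum size S2 mid-beat.
    return S1 + (S2 - S1) * 0.5 * (1.0 - math.cos(2.0 * math.pi * phase))


# Example: at 120 bpm (0.5 s per beat) the item returns to S1 every 0.5 s.
for t in (0.0, 0.125, 0.25, 0.375, 0.5):
    print(f"t={t:.3f} s  size={item_size(t, 120):.1f}")
```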
  • The GUI database 16 stores a plurality of independent GUI models as independent data structures 13.
  • A GUI model defines a particular GUI 20 and, if the GUI 20 adapts automatically to audible music, it defines how the GUI adapts with musical time.
  • The adaptable GUI illustrated in Figs 2 and 3 would be defined by a single GUI model.
  • This model would define what aspects of the GUI 20 change in musical time.
  • The graphical menu item 26A, for example, varies between sizes S1 and S2 with the tempo of the music.
  • A GUI model for an automatically adaptable GUI consequently defines an ordered sequence of GUI configurations that are adopted at a rate determined by the beat of the music.
  • A configuration is the collection of the graphical items forming the GUI 20 and their visual attributes.
  • The GUI model defines how the graphical items and their visual attributes change with musical time.
  • The graphical items will be different for each GUI 20, but may include, for example, indicators (e.g. battery life remaining, received signal strength, volume, etc), items (such as menu entries, icons or buttons) for selection by a user, a background and images.
  • The visual attributes may include one or more of: the position(s) of one or more graphical items; the size(s) of one or more graphical items; the shape(s) of one or more graphical items; the color of one or more graphical items; a color palette; and the animation of one or more graphical items, such as the fluttering of a graphical menu item like a flag in time with the music.
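  • The patent does not mandate a concrete encoding for a GUI model data structure 13. The following is a minimal sketch of one possible encoding as an ordered sequence of configurations; every class and field name here is an assumption made for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class GraphicalItem:
    """One graphical item of the GUI 20 and its visual attributes."""
    name: str                       # e.g. "menu_item_26A" or "background_22"
    position: tuple = (0, 0)
    size: int = 48
    color: str = "#3060c0"


@dataclass
class GUIModel:
    """Ordered sequence of GUI configurations, advanced one step per beat."""
    name: str
    configurations: list = field(default_factory=list)   # list of item lists

    def configuration_for_beat(self, beat_index: int) -> list:
        # Cycle through the configurations at a rate set by the music's beat.
        return self.configurations[beat_index % len(self.configurations)]


# The pulsating GUI of Figs 2 and 3 expressed as a two-configuration model.
pulse = GUIModel(
    name="pulsating_menu",
    configurations=[
        [GraphicalItem("menu_item_26A", size=48)],   # size S1 (Figs 2A, 2C)
        [GraphicalItem("menu_item_26A", size=64)],   # size S2 (Fig 2B)
    ],
)
```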
  • Figs 2 and 3 are simple examples provided to illustrate the concept of embodiments of the invention; other implementations may be significantly different and/or more complex.
  • For example, the background may fade in and out with the tempo of the music and/or the color palette used for the graphical user interface may vary with the tempo of the music.
  • Fig. 4 illustrates a method of generating a GUI that changes in response to audible music.
  • The selection of the current GUI model is schematically illustrated at block 50 in Fig. 4.
  • The selection may be based upon current context information 60.
  • The context information may be, for example, a user input command 62 that selects or specifies the current GUI model.
  • The selection may alternatively be automatic, that is, made without user intervention.
  • The context information may be, for example, music information such as metadata 64 provided with the music track that is being played or derived by processing the audible music.
  • This metadata may indicate characteristics of the music such as, for example, the music genre, keywords from the lyrics, time signature, mood (danceable, romantic) etc.
  • The automatic selection of the current GUI model may be based on this metadata, as in the sketch below.
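  • A minimal sketch of such metadata-driven selection at block 50 follows; the metadata fields and GUI model names are hypothetical.

```python
# Assumed mapping from track metadata to GUI model names in the database 16.
GENRE_TO_MODEL = {
    "dance": "pulsating_menu",
    "classical": "slow_fade_background",
    "rock": "rippling_icons",
}


def select_gui_model(metadata: dict, default: str = "static_gui") -> str:
    """Pick the current GUI model from track metadata (genre, mood, ...)."""
    if metadata.get("mood") == "romantic":
        return "slow_fade_background"
    return GENRE_TO_MODEL.get(metadata.get("genre", ""), default)


print(select_gui_model({"genre": "dance", "bpm": 128}))   # -> pulsating_menu
```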
  • The context information may alternatively be, for example, environmental music information that is detected from radio or sound waves in the environment of the system 10.
  • It may be metadata derived by processing ambient audible music detected via a microphone 66.
  • This metadata may indicate characteristics of the music such as, for example, the music genre, keywords from the lyrics detected using voice recognition, time signature etc.
  • The automatic selection of the current GUI model may be based on this metadata.
  • At step 54, music information that is dependent upon a characteristic of the music is obtained; in this example, the characteristic is the music tempo.
  • The tempo is typically expressed in beats per minute.
  • The music tempo may be provided with the music track as metadata, derived from the music, or input by the user. Derivation of the music tempo is suitable both when the music is produced from a stored music track and when the music is ambient music produced by a third party.
  • The tempo information can be derived automatically using digital signal processing techniques, for example as sketched below.
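  • The patent leaves the signal-processing technique open. One common approach, sketched here with NumPy, estimates the tempo by autocorrelating an onset-energy envelope; the hop size, the frame-energy novelty measure and the 40-200 bpm search range are all assumptions, not details from the patent.

```python
import numpy as np


def estimate_tempo_bpm(samples: np.ndarray, sr: int) -> float:
    """Crude tempo estimate from mono audio samples in [-1, 1]."""
    hop = 512
    n_frames = len(samples) // hop
    energy = np.array([np.sum(samples[i * hop:(i + 1) * hop] ** 2)
                       for i in range(n_frames)])
    novelty = np.maximum(np.diff(energy), 0.0)    # energy rises mark onsets
    ac = np.correlate(novelty, novelty, mode="full")[len(novelty) - 1:]
    frame_rate = sr / hop                         # novelty frames per second
    # Only consider beat periods corresponding to 40-200 beats per minute.
    lo = int(frame_rate * 60 / 200)
    hi = int(frame_rate * 60 / 40)
    best_lag = lo + int(np.argmax(ac[lo:hi]))
    return 60.0 * frame_rate / best_lag
```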
  • The processor 2 uses the music tempo obtained in step 54 and the current GUI model to control the GUI 20 displayed on the display 4.
  • The GUI 20 changes its appearance in time with the audible music.
  • The appearance of the GUI may be changed with successive beats of the audible music in a manner defined by the current GUI model.
  • Each GUI model data structure 13 may be transferable independently into and out of the database 16.
  • A data structure 13 can, for example, be downloaded from a website, uploaded to a website, transferred from one device or storage device to another, etc.
  • Each GUI model data structure 13, and therefore each GUI model, is independently portable.
  • A common standard model may be used as a basis for each GUI model; that is, there is a semantic convention for specifying the GUI attributes.
  • A new GUI model can be created by a user by creating a new GUI model data structure 13 and storing it in the GUI model database 16.
  • An existing GUI model may be varied by editing the existing GUI model data structure 13 for that GUI model and saving the new data structure in the GUI model database 16.
  • A GUI model data structure 13 for use with a music track may be provided with that music track.
  • Information other than the tempo of the music track can also be obtained.
  • This may include, for example, the pitch, which can be estimated using methods presented in the literature, e.g. A. de Cheveigné and H. Kawahara, "YIN, a fundamental frequency estimator for speech and music", J. Acoust. Soc. Am., vol. 111, pp. 1917-1930, April 2002, or Matti P. Ryynänen and Anssi Klapuri, "Polyphonic Music Transcription Using Note Event Modeling", Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, Oct. 16-19, 2005, New Paltz, New York.
  • The color of a GUI element may be adapted according to the pitch, e.g. such that the color changes from blue to red when the pitch of the music increases.
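  • Assuming a pitch estimate in Hz (for example from the YIN method cited above), a simple blue-to-red mapping might be sketched as follows; the 80-1000 Hz range is an assumption.

```python
def pitch_to_rgb(pitch_hz: float, lo: float = 80.0, hi: float = 1000.0) -> tuple:
    """Map pitch to a color: low pitches blue, high pitches red."""
    # Clamp and normalise the pitch into [0, 1] over an assumed musical range.
    t = min(max((pitch_hz - lo) / (hi - lo), 0.0), 1.0)
    return (int(255 * t), 0, int(255 * (1.0 - t)))   # (R, G, B)


print(pitch_to_rgb(110.0))   # low pitch  -> mostly blue
print(pitch_to_rgb(880.0))   # high pitch -> mostly red
```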
  • A filter bank may be used to divide the music spectrum into N bands and to analyze the energy in each band.
  • The energies and energy changes in different bands can be detected and produced as musical information for use at step 54.
  • For example, the spectrum can be divided into three bands and the energy in each can be used to control the amount of red, blue, and green color in a GUI element or background.
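  • A sketch of that three-band variant, using an FFT as a stand-in for the filter bank; the band edges and the assignment of bands to color channels are assumptions.

```python
import numpy as np


def band_energies_to_rgb(frame: np.ndarray, sr: int) -> tuple:
    """Map the energies of three spectral bands of one audio frame to R, G, B."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    # Assumed band edges: low / mid / high, in Hz.
    bands = [(0.0, 250.0), (250.0, 2000.0), (2000.0, sr / 2.0)]
    energies = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in bands])
    total = float(energies.sum()) or 1.0
    # One arbitrary assignment of bands to channels: low -> red,
    # mid -> green, high -> blue; the patent leaves the mapping open.
    r, g, b = (255.0 * energies / total).astype(int)
    return int(r), int(g), int(b)
```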
  • The musical information may identify different instruments. Essid, Richard and David, "Instrument Recognition in Polyphonic Music", Proc. IEEE Int. Conference on Acoustics, Speech, and Signal Processing, 2005, provides a method for recognizing the presence of different musical instruments. For example, detecting the presence of an electric guitar may make a UI element ripple, creating the illusion that the distortion of the guitar sound distorts the graphical element.
  • The musical information may identify music harmony and tonality: Gómez and Herrera, "Automatic Extraction of Tonal Metadata from Polyphonic Audio Recordings", AES 25th International Conference, London, United Kingdom, 17-19 June 2004, provides a method for identifying music harmony and tonality.
  • The GUI model might define that certain chords of the music are mapped to different colors.
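  • A minimal sketch of such a chord-to-color mapping that a GUI model could carry; the chord labels and colors are arbitrary assumptions.

```python
# Assumed chord-to-color table carried by a GUI model data structure 13.
CHORD_COLORS = {
    "C:maj": "#ffd700",
    "G:maj": "#ff8c00",
    "A:min": "#4060a0",
}


def color_for_chord(chord_label: str, default: str = "#808080") -> str:
    """Color of a GUI element for the currently detected chord."""
    return CHORD_COLORS.get(chord_label, default)


print(color_for_chord("A:min"))   # -> #4060a0
```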
  • The GUI could also be adapted according to the characteristics of the sound coming from the microphone.
  • The GUI elements can be made to ripple according to the volume of the sound recorded with the microphone: loud noises, for example, cause the GUI elements to ripple.
  • In this case, the music player of the device is not playing anything; the device simply analyzes the incoming audio recorded with the microphone and uses the audio characteristics to control the appearance of the GUI items.
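  • A sketch of that microphone-driven mode, assuming capture frames of mono samples in [-1, 1] arrive from elsewhere and that "ripple amplitude" is a visual attribute of the GUI elements; the threshold and scaling range are assumptions.

```python
import numpy as np


def ripple_amplitude(mic_frame: np.ndarray, threshold_db: float = -30.0) -> float:
    """Map the loudness of a captured microphone frame to a ripple amplitude.

    Quiet ambience yields no ripple; loud noises ripple the GUI elements.
    """
    rms = np.sqrt(np.mean(mic_frame ** 2)) + 1e-12    # avoid log of zero
    level_db = 20.0 * np.log10(rms)                   # dB relative to full scale
    if level_db < threshold_db:
        return 0.0
    # Scale the amplitude from 0 to 1 over a 30 dB range above the threshold.
    return min((level_db - threshold_db) / 30.0, 1.0)
```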

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)
  • Stereophonic System (AREA)
  • Auxiliary Devices For Music (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to a method comprising obtaining audio data that defines at least one characteristic of audible music, and controlling changes to the appearance of a graphical user interface using the audio data.
PCT/IB2006/001932 2006-05-12 2006-05-12 Adaptive user interface WO2007132286A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/IB2006/001932 WO2007132286A1 (fr) 2006-05-12 2006-05-12 Adaptive user interface
CA2650612A CA2650612C (fr) 2006-05-12 2006-05-12 Adaptive user interface
US12/227,313 US20090307594A1 (en) 2006-05-12 2006-05-12 Adaptive User Interface
TW096110343A TWI433027B (zh) 2006-05-12 2007-03-26 Adaptive user interface

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2006/001932 WO2007132286A1 (fr) 2006-05-12 2006-05-12 Adaptive user interface

Publications (2)

Publication Number Publication Date
WO2007132286A1 (fr) 2007-11-22
WO2007132286A8 WO2007132286A8 (fr) 2008-03-06

Family

ID=38693591

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/001932 WO2007132286A1 (fr) 2006-05-12 2006-05-12 Adaptive user interface

Country Status (4)

Country Link
US (1) US20090307594A1 (fr)
CA (1) CA2650612C (fr)
TW (1) TWI433027B (fr)
WO (1) WO2007132286A1 (fr)


Families Citing this family (102)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
US8930002B2 (en) * 2006-10-11 2015-01-06 Core Wireless Licensing S.A.R.L. Mobile communication terminal and method therefor
US10002189B2 (en) 2007-12-20 2018-06-19 Apple Inc. Method and apparatus for searching using an active ontology
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US20100030549A1 (en) 2008-07-31 2010-02-04 Lee Michael M Mobile device having human language translation capability with positional feedback
US8676904B2 (en) 2008-10-02 2014-03-18 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US20100255827A1 (en) * 2009-04-03 2010-10-07 Ubiquity Holdings On the Go Karaoke
DE102009037687A1 (de) * 2009-08-18 2011-02-24 Sennheiser Electronic Gmbh & Co. Kg Microphone unit, pocket transmitter and wireless audio system
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
JP5962038B2 (ja) * 2012-02-03 2016-08-03 Sony Corporation Signal processing device, signal processing method, program, signal processing system, and communication terminal
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US10417037B2 (en) 2012-05-15 2019-09-17 Apple Inc. Systems and methods for integrating third party services with a digital assistant
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
KR20240132105A (ko) 2013-02-07 2024-09-02 Apple Inc. Voice trigger for a digital assistant
WO2014197335A1 (fr) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
KR101772152B1 (ko) 2013-06-09 2017-08-28 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US10296160B2 (en) 2013-12-06 2019-05-21 Apple Inc. Method for extracting salient dialog usage from live data
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
CN110797019B (zh) 2014-05-30 2023-08-29 Apple Inc. Multi-command single-utterance input method
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10152299B2 (en) 2015-03-06 2018-12-11 Apple Inc. Reducing response latency of intelligent automated assistants
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US10460227B2 (en) 2015-05-15 2019-10-29 Apple Inc. Virtual assistant in a communication session
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US20160378747A1 (en) 2015-06-29 2016-12-29 Apple Inc. Virtual assistant for media playback
US10956666B2 (en) 2015-11-09 2021-03-23 Apple Inc. Unconventional virtual assistant interactions
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US11227589B2 (en) 2016-06-06 2022-01-18 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
DK201770383A1 (en) 2017-05-09 2018-12-14 Apple Inc. USER INTERFACE FOR CORRECTING RECOGNITION ERRORS
DK201770439A1 (en) 2017-05-11 2018-12-13 Apple Inc. Offline personal assistant
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
DK201770428A1 (en) 2017-05-12 2019-02-18 Apple Inc. LOW-LATENCY INTELLIGENT AUTOMATED ASSISTANT
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
DK179549B1 (en) 2017-05-16 2019-02-12 Apple Inc. FAR-FIELD EXTENSION FOR DIGITAL ASSISTANT SERVICES
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
US20180336275A1 (en) 2017-05-16 2018-11-22 Apple Inc. Intelligent automated assistant for media exploration
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
DK179822B1 (da) 2018-06-01 2019-07-12 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
DK180639B1 (en) 2018-06-01 2021-11-04 Apple Inc DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT
DK201870355A1 (en) 2018-06-01 2019-12-16 Apple Inc. VIRTUAL ASSISTANT OPERATION IN MULTI-DEVICE ENVIRONMENTS
US11076039B2 (en) 2018-06-03 2021-07-27 Apple Inc. Accelerated task performance
US11010561B2 (en) 2018-09-27 2021-05-18 Apple Inc. Sentiment prediction from textual data
US11170166B2 (en) 2018-09-28 2021-11-09 Apple Inc. Neural typographical error modeling via generative adversarial networks
US11462215B2 (en) 2018-09-28 2022-10-04 Apple Inc. Multi-modal inputs for voice commands
US10839159B2 (en) 2018-09-28 2020-11-17 Apple Inc. Named entity normalization in a spoken dialog system
US11475898B2 (en) 2018-10-26 2022-10-18 Apple Inc. Low-latency multi-speaker speech recognition
US11638059B2 (en) 2019-01-04 2023-04-25 Apple Inc. Content playback on multiple devices
US11348573B2 (en) 2019-03-18 2022-05-31 Apple Inc. Multimodality in digital assistant systems
US11307752B2 (en) 2019-05-06 2022-04-19 Apple Inc. User configurable task triggers
US11475884B2 (en) 2019-05-06 2022-10-18 Apple Inc. Reducing digital assistant latency when a language is incorrectly determined
US11423908B2 (en) 2019-05-06 2022-08-23 Apple Inc. Interpreting spoken requests
DK201970509A1 (en) 2019-05-06 2021-01-15 Apple Inc Spoken notifications
US11140099B2 (en) 2019-05-21 2021-10-05 Apple Inc. Providing message response suggestions
US11289073B2 (en) 2019-05-31 2022-03-29 Apple Inc. Device text to speech
DK180129B1 (en) 2019-05-31 2020-06-02 Apple Inc. USER ACTIVITY SHORTCUT SUGGESTIONS
US11496600B2 (en) 2019-05-31 2022-11-08 Apple Inc. Remote execution of machine-learned models
US11360641B2 (en) 2019-06-01 2022-06-14 Apple Inc. Increasing the relevance of new available information
WO2021056255A1 (fr) 2019-09-25 2021-04-01 Apple Inc. Text detection using global geometry estimators


Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5388163A (en) * 1991-12-23 1995-02-07 At&T Corp. Electret transducer array and fabrication technique
US6490359B1 (en) * 1992-04-27 2002-12-03 David A. Gibson Method and apparatus for using visual images to mix sound
JP3248981B2 (ja) * 1992-06-02 2002-01-21 Matsushita Electric Industrial Co., Ltd. Computer
US5898759A (en) * 1996-08-27 1999-04-27 Chaw Khong Technology Co., Ltd. Telephone answering machine with on-line switch function
EP1947858B1 (fr) * 2000-10-11 2014-07-02 United Video Properties, Inc. Systems and methods for providing media on demand
US20040201603A1 (en) * 2003-02-14 2004-10-14 Dan Kalish Method of creating skin images for mobile phones
DE10313330B4 (de) * 2003-03-25 2005-04-14 Siemens Audiologische Technik Gmbh Method for suppressing at least one acoustic interference signal and apparatus for carrying out the method
US7208669B2 (en) * 2003-08-25 2007-04-24 Blue Street Studios, Inc. Video game system and method
US20050250438A1 (en) * 2004-05-07 2005-11-10 Mikko Makipaa Method for enhancing communication, a terminal and a telecommunication system
JP4641217B2 (ja) * 2005-06-08 2011-03-02 Toyota Central R&D Labs., Inc. Microphone and method for manufacturing the same
WO2007024909A1 (fr) * 2005-08-23 2007-03-01 Analog Devices, Inc. Multi-microphone system
DE102006004287A1 (de) * 2006-01-31 2007-08-02 Robert Bosch Gmbh Micromechanical component and corresponding production method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5286908A (en) * 1991-04-30 1994-02-15 Stanley Jungleib Multi-media system including bi-directional music-to-graphic display interface
US6898759B1 (en) * 1997-12-02 2005-05-24 Yamaha Corporation System of generating motion picture responsive to music
JP2004064191A (ja) * 2002-07-25 2004-02-26 Matsushita Electric Ind Co Ltd Communication terminal device
EP1414019A2 (fr) * 2002-10-22 2004-04-28 Rohm Co., Ltd. System for forming synchronized melody and image data, and system for producing a melody and an image synchronously
WO2004064036A1 (fr) * 2003-01-07 2004-07-29 Madwaves Ltd. Systems and methods for creating, modifying, listening to and interacting with musical works
US20050070241A1 (en) * 2003-09-30 2005-03-31 Northcutt John W. Method and apparatus to synchronize multi-media events
EP1583335A1 (fr) * 2004-04-02 2005-10-05 Sony Ericsson Mobile Communications AB Rhythm detection in mobile telephones

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DATABASE WPI Week 200418, Derwent World Patents Index; Class P86, AN 2004-187917 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110042684A1 (en) * 2008-04-17 2011-02-24 Sumitomo Electric Industries, Ltd. Method of Growing AlN Crystals, and AlN Laminate

Also Published As

Publication number Publication date
CA2650612C (fr) 2012-08-07
TWI433027B (zh) 2014-04-01
TW200802066A (en) 2008-01-01
WO2007132286A8 (fr) 2008-03-06
CA2650612A1 (fr) 2007-11-22
US20090307594A1 (en) 2009-12-10

Similar Documents

Publication Publication Date Title
CA2650612C (fr) Adaptive user interface
EP1736961B1 System and method for creating improved ringtones for mobile telephones
JP4640463B2 (ja) Playback device, display method, and display program
CN109615682A Animation generation method and apparatus, electronic device, and computer-readable storage medium
MX2011012749A (es) System and method for receiving, analyzing and editing audio to create musical compositions
JP2009123124A (ja) Music search system and method, and program therefor
US20070297292A1 (en) Method, computer program product and device providing variable alarm noises
JP4375810B1 (ja) Karaoke host device and program
CN110211556A Music file processing method and apparatus, terminal, and storage medium
US20090067605A1 (en) Video Sequence for a Musical Alert
CN106205571A Method and device for processing singing voice
CN110223677A Spatial audio signal filtering
CN113781989B Audio-driven animation playback and rhythm beat-point recognition method, and related apparatus
JP2007271977A Evaluation criterion determination device, control method, and program
WO2023273440A1 (fr) Method and apparatus for generating a plurality of sound effects, and terminal device
CN101370216B Method for emotion-based processing and playback of audio files on a mobile phone
JP2007256619A Evaluation device, control method, and program
JP4839967B2 Teaching device and program
KR100468971B1 Music playback device capable of melody-based search
CN104869233B Recording method
CN113345394B Audio data processing method and apparatus, electronic device, and storage medium
JP5742472B2 Data search device and program
US20240169962A1 (en) Audio data processing method and apparatus
JP2007233078A Evaluation device, control method, and program
Fan et al. The realization of multifunctional guitar effectors&synthesizer based on ADSP-BF533

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 06779858

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2650612

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 12227313

Country of ref document: US

122 Ep: pct application non-entry in european phase

Ref document number: 06779858

Country of ref document: EP

Kind code of ref document: A1