EP2798632A1 - Accès direct à une grammaire (Direct access to a grammar) - Google Patents
Info
- Publication number
- EP2798632A1 (application number EP11879105.2A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- input
- user
- vehicle
- function
- control command
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R16/00—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
- B60R16/02—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
- B60R16/037—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for occupant comfort, e.g. for automatic adjustment of appliances according to personal settings, e.g. seats, mirrors, steering wheel
- B60R16/0373—Voice control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/183—Speech classification or search using natural language modelling using context dependencies, e.g. language models
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo, light or radio wave sensitive means, e.g. infrared sensors
- B60W2420/403—Image sensing, e.g. optical camera
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2540/00—Input parameters relating to occupants
- B60W2540/21—Voice
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/038—Indexing scheme relating to G06F3/038
- G06F2203/0381—Multimodal input, i.e. interface arrangements enabling the user to issue commands by simultaneous use of input devices of different nature, e.g. voice plus gesture on digitizer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/041—Indexing scheme relating to G06F3/041 - G06F3/045
- G06F2203/04108—Touchless 2D- digitiser, i.e. digitiser detecting the X/Y position of the input means, finger or stylus, also when it does not touch, but is proximate to the digitiser's interaction surface without distance measurement in the Z direction
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/226—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
- G10L2015/228—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of application context
Definitions
- aspects of the disclosure relate generally to speech recognition, and more particularly, to the targeting of speech recognition to specific functions associated with a vehicle.
- Speech recognition technology has been increasingly deployed for a variety of purposes, including electronic dictation, voice command recognition, and telephone-based customer service engines. Speech recognition typically involves the processing of acoustic signals that are received via a microphone. In doing so, a speech recognition engine is typically utilized to interpret the acoustic signals into words or grammar elements. In certain environments, such as vehicular environments, the use of speech recognition technology enhances safety because drivers are able to provide instructions in a hands-free manner.
- FIG. 1 is a block diagram of an example system or architecture that may be utilized to target speech input to various vehicle functions, according to an example embodiment of the disclosure.
- FIG. 2 is a simplified schematic diagram illustrating example techniques for obtaining user input associated with targeted speech recognition.
- FIG. 3 is a block diagram of an example speech recognition system or architecture that may be utilized in various embodiments of the disclosure.
- FIG. 4 is a flow diagram of an example method for evaluating user input to target speech recognition to a vehicle function.
- FIG. 5 is a flow diagram of an example method for identifying a gesture associated with the targeting of speech recognition.
- FIG. 6 is a flow diagram of an example method for identifying proximity information associated with the targeting of speech recognition.
- FIG. 7 is a flow diagram of an example method for associating user inputs with grammar elements for speech recognition.
- Embodiments of the disclosure may provide systems, methods, and apparatus for targeting speech recognition to any number of functions associated with a vehicular or other environment.
- in this manner, traversing a hierarchy of grammar elements associated with a plurality of different functions and/or applications may be avoided, thereby leading to relatively quicker processing of final commands and to a higher level of user satisfaction.
- a subset or cluster of function-specific grammar elements may be associated with each function. For example, a first subset of grammar elements may be associated with a radio function (or other function), and a second subset of grammar elements may be associated with a climate function (or other function).
- a desired function and its associated subset of grammar elements may be selected. The subset of grammar elements may then be utilized to process speech input associated with and targeted to the selected function.
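The function-specific grammar clustering described above can be sketched as follows. All names (`GRAMMARS`, `select_grammar`, `recognize`) and the example commands are hypothetical illustrations, not taken from the patent.

```python
# Sketch of function-specific grammar clusters (hypothetical names/commands).
GRAMMARS = {
    "radio":   {"up", "down", "seek", "preset one", "preset two"},
    "climate": {"up", "down", "auto", "defrost", "recirculate"},
}

def select_grammar(function):
    """Return the subset of grammar elements targeted to one function."""
    return GRAMMARS.get(function, set())

def recognize(utterance, function):
    """Evaluate speech input only against the selected function's grammar."""
    return utterance if utterance in select_grammar(function) else None
```

Because each subset is small, an utterance such as "defrost" is simply rejected while the radio function is targeted, rather than being matched against a vehicle-wide vocabulary.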
- acoustic models within the vehicle may be optimized for use with specific hardware and various internal and/or external acoustics.
- suitable vehicles include, but are not limited to, cars, trucks, light-duty trucks, heavy-duty trucks, pickup trucks, minivans, crossover vehicles, vans, commercial vehicles, private vehicles, sports utility vehicles, tractor-trailers, aircraft, airplanes, jets, helicopters, space vehicles, watercraft, or any other suitable vehicle with communicative and sensory capability.
- embodiments of the disclosure may also be utilized in other transportation or non-transportation related applications where electronic communication between two systems may be implemented.
- a plurality of grammar elements associated with audible commands may be associated with a vehicle.
- the grammar elements may be stored in association with a suitable speech recognition system or component of the vehicle.
- the plurality of grammar elements may include respective grammar elements associated with any number of vehicle functions.
- the vehicle functions may include, for example, a vehicle control function, a climate control function, an audio system function, a window (e.g., windows, sunroof, etc.) control function, a seat control function, a display control function, a navigation control function, a Web or other network function, a communications control function, and/or any other functions associated with a wide variety of vehicle systems, components, and/or applications.
- a subset of the plurality of grammar elements may be associated with each of the vehicle functions.
- a relatively small vocabulary of grammar elements may be associated with each function.
- user input may be identified and evaluated in order to select a desired vehicle function.
- the grammar elements associated with the selected function, which may be a subset of the plurality of grammar elements (or which may be separately stored and/or obtained from any number of suitable data sources), may be identified.
- a wide variety of different types of user inputs may be identified as desired in various embodiments, including but not limited to, a user gesture, user proximity to an input element, and/or user selection of an input element.
- an image capture device (e.g., a camera, etc.) may be utilized to collect images of an object of interest (e.g., a user's hand, etc.).
- the collected images may be evaluated and/or processed to identify a gesture made by the user.
- a wide variety of different types of gestures may be identified as desired, such as a gesture associated with a hand movement (e.g., complete hand movement, finger movement, etc.) and/or a gesture associated with an indication of (e.g., contact with, proximity to, pointing to, etc.) a defined region of interest within the vehicle.
- a desired function may then be identified or selected based at least in part upon an evaluation of the gesture.
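As one illustration of how a pointing gesture might be evaluated to select a function, the sketch below picks the dashboard region whose center best aligns with the pointing direction derived from two tracked hand points. The region coordinates and every name here are hypothetical assumptions for illustration only.

```python
import math

# Hypothetical 2D dashboard coordinates for two regions of interest.
REGIONS = {"audio": (0.0, 0.0), "climate": (2.0, 0.0)}

def function_for_pointing(knuckle, fingertip):
    """Select the region best aligned with the knuckle->fingertip ray."""
    dx, dy = fingertip[0] - knuckle[0], fingertip[1] - knuckle[1]
    norm = math.hypot(dx, dy) or 1.0
    best, best_score = None, -2.0
    for name, (rx, ry) in REGIONS.items():
        vx, vy = rx - fingertip[0], ry - fingertip[1]
        vnorm = math.hypot(vx, vy) or 1.0
        score = (dx * vx + dy * vy) / (norm * vnorm)  # cosine of the angle
        if score > best_score:
            best, best_score = name, score
    return best
```

A real gesture pipeline would of course work on image data; this only shows the final step of mapping an already-tracked pointing direction to a function.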
- one or more proximity detectors and/or proximity sensors may be utilized to determine when the user (e.g., a user's hand, etc.) is in proximity to an input element (e.g., a switch, button, knob, input region, etc.), and a desired function may be identified or selected based upon the determined proximity.
- a user selection of an input element (e.g., a switch, knob, etc.) may additionally or alternatively be identified, and a desired function may be selected based at least in part upon the selection.
- a set of grammar elements associated with the function may be utilized to process received audio input, such as speech input. Audio input may be collected by any number of suitable audio capture devices, such as one or more microphones.
- the collection or capture of audio input may be initiated based at least in part upon the identified user input. For example, when an input element selection or gesture is identified (or the onset of a gesture is identified), a microphone may be turned on.
- the identified user input may be utilized to identi fy relevant collected audio input. For example, a buffer may be utilized to store recently collected audio input. Once a user input is identified, audio input captured immediately prior to, during, and/or immediately after the user input may be identified. In either case, the collected audio may be evaluated utilizing the grammar elements associated with the identified function. In this regard, a grammar element (or plurality of grammar elements) or command associated with the function may be identified as corresponding to the collected audio input.
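The buffering approach described above, retaining recently captured audio so that speech spoken just before the user input remains available, might be sketched as below. `AudioBuffer`, the frame granularity, and the buffer size are all assumptions.

```python
from collections import deque

class AudioBuffer:
    """Keep the most recent audio frames so speech captured just before a
    user input (e.g., a button press) remains available for evaluation."""

    def __init__(self, max_frames=100):
        self._frames = deque(maxlen=max_frames)  # oldest frames drop off

    def push(self, frame):
        self._frames.append(frame)

    def recent(self, n):
        """Return up to the n most recently captured frames."""
        return list(self._frames)[-n:]
```

When a user input is identified, the evaluation step would pull `recent(n)` frames covering the moments immediately before (and, as frames keep arriving, during and after) the input, then match them against the selected function's grammar.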
- once a grammar element (or plurality of grammar elements) has been identified as matching or otherwise corresponding to the audio input, a wide variety of suitable information may be output, such as an indication of the identified grammar element or a control signal associated with the function. For example, if an audio system function has been identified, then an "up" command may be identified and processed in order to turn the volume of the radio up. As another example, if a window function has been identified, then an "up" command may be identified and processed in order to roll a window up.
- a user may be permitted to associate desired user inputs and/or grammar elements with various functions.
- a learn new input function or indication may be identified (e.g., identified based upon user input), and one or more user inputs (e.g., gestures, proximities to input elements, selection of input elements, etc.) may be tracked based upon the learn new input indication.
- the tracked one or more user inputs may then be associated with a desired function, such as a function selected and/or otherwise specified by a user.
- as desired, audio input provided by the user (e.g., spoken words and/or phrases, etc.) may also be associated with the desired function as one or more grammar elements.
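A minimal sketch of the learn-new-input flow described above, assuming hypothetical names throughout: while learning is active, tracked user inputs are recorded and then bound to a user-specified function.

```python
class InputLearner:
    """Record user inputs during a 'learn new input' session and bind them
    to a function chosen by the user (hypothetical sketch)."""

    def __init__(self):
        self.bindings = {}       # user input -> associated function
        self._learning = False
        self._tracked = []

    def start_learning(self):
        self._learning, self._tracked = True, []

    def track(self, user_input):
        if self._learning:       # inputs are only recorded while learning
            self._tracked.append(user_input)

    def bind(self, function):
        """Associate all inputs tracked during the session with a function."""
        for user_input in self._tracked:
            self.bindings[user_input] = function
        self._learning = False
```

After binding, a later occurrence of the same gesture or element selection could be looked up in `bindings` to target speech recognition to the associated function.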
- FIG. 1 is a block diagram of an example system 100 or architecture that may be utilized to target speech input to various vehicle functions, according to an example embodiment of the disclosure.
- the system 100 may include a wide variety of hardware and/or functional components, such as a user input component 105, a selection component 110, any number of sets or clusters of function-specific grammars 115, an audio capture component 120, a speech engine 125, and/or an action component 130. Each of these components will be described in greater detail below. Additionally, it will be appreciated that the system 100 of FIG. 1 may be embodied in a wide variety of suitable forms, including but not limited to various systems, apparatus, and/or computer-readable media that are executed by one or more processors. One example detailed embodiment of the system 100 illustrated in FIG. 1 is described in greater detail below with reference to FIG. 3.
- the user input component 105 may facilitate the collection, determination, and/or identification of one or more user inputs associated with a vehicle.
- a wide variety of different types of user inputs may be collected and/or identified as desired including, but not limited to, gestures made by a user, user proximity to one or more input elements, and/or user selection of one or more input elements (e.g., physical input elements such as switches, knobs, buttons, etc.).
- a wide variety of suitable user input collection devices may be utilized to collect and/or identify user input, such as one or more image capture devices, one or more proximity sensors, and/or one or more input elements.
- the selection component 110 may identify or determine a function associated with the vehicle. A wide variety of function-specific information may then be identified and/or selected by the selection component 110. For example, a set of grammar elements (e.g., voice commands, etc.) associated with the function may be selected. In certain embodiments, a set or cluster of function-specific grammars 115 associated with the function may be selected. In this regard, the received user input may be utilized to target speech recognition to grammar elements associated with a desired function.
- the audio capture component 120 may be utilized to collect or capture audio input associated with a user.
- a microphone may be utilized to collect an audio signal including voice commands (e.g., words, phrases, etc.) spoken by a user.
- the speech engine 125 may receive the audio input and evaluate the received audio input utilizing the grammar elements associated with the selected or desired function. In this regard, the speech engine 125 may identify a grammar element or voice command associated with the selected function.
- a wide variety of suitable speech recognition algorithms and/or techniques may be utilized as desired to identify a grammar element or voice command spoken by the user.
- a grammar element may be identified.
- a wide variety of suitable outputs, instructions, and/or control actions may be taken.
- the action component 130 may generate one or more control signals that are provided to any number of vehicle applications and/or components associated with the selected function.
- the action component 130 may translate a received and identified voice command into a format that may be processed by an application associated with the selected function.
- FIG. 2 is a simplified schematic diagram 200 illustrating example techniques for obtaining user input associated with targeted speech recognition.
- a user's hand 205, a vehicle audio control panel 210, and a vehicle climate control panel 215 are depicted.
- the audio control panel 210 may be associated with one or more audio control functionalities, and the climate control panel 215 may be associated with one or more climate control functionalities.
- each of the control panels 210, 215 may include any number of physical input elements, such as various knobs, buttons, switches, and/or touch screen displays.
- each of the control panels may include or be associated with one or more proximity sensors configured to detect proximity of the user's hand 205 (or other object).
- each of the control panels may be associated with one or more designated input regions within the vehicle.
- a designated input region on the dash, console, or other location within the vehicle may be associated with audio controls.
- a designated input region may include one or more proximity sensors.
- a wide variety of suitable methods and/or techniques may be utilized as desired to identify, collect, and/or obtain user input associated with the control panels 210, 215 and/or their underlying functions.
- the motion of the user's hand may be tracked in order to identify a gesture indicative of a control panel or underlying function.
- a wide variety of different types of gestures may be identified.
- a predetermined motion (or series of motions) associated with an audio control function may be identified based upon tracking hand 205 and/or finger movement.
- the user may point to a control panel or associated input region, and the pointing may be identified as a gesture.
- as another example, user contact with or proximity to an associated input region may be identified as a gesture based upon an evaluation of image data. Any of the identified gestures may be evaluated in order to select a desired underlying function, such as a function associated with one of the control panels 210, 215.
- one or more proximity sensors may be utilized to detect and/or determine proximity between the user's hand 205 and a control panel and/or an input element (e.g., a physical input element, an input region, etc.) associated with the control panel.
- a desired function may then be selected based at least in part upon an evaluation of the determined proximity.
- an audio control function may be selected based upon a determined proximity between the user's hand 205 and the audio control panel 210.
- an audio tuning function (e.g., radio tuning, satellite radio tuning, etc.) may be selected based upon a determined proximity between the user's hand 205 and a tuning input element (e.g., a tuning knob, etc.) associated with the audio control panel 210.
- a subset of applicable grammar elements for a function may be identified with varying degrees of particularity.
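The "varying degrees of particularity" might look like the following sketch, where proximity to a panel selects a broad grammar and proximity to a specific input element narrows it further. The grammars, panel names, and element names are invented for illustration.

```python
# Broad grammar selected when only the panel is known (hypothetical).
PANEL_GRAMMAR = {"audio": {"up", "down", "seek", "mute", "preset one"}}

# Narrower grammar selected when a specific element is known (hypothetical).
ELEMENT_GRAMMAR = {("audio", "tuning_knob"): {"up", "down", "seek"}}

def grammar_for(panel, element=None):
    """Return the most particular grammar available for the detected input."""
    if element is not None and (panel, element) in ELEMENT_GRAMMAR:
        return ELEMENT_GRAMMAR[(panel, element)]
    return PANEL_GRAMMAR.get(panel, set())
```

Proximity to the audio panel alone yields the full audio vocabulary, while proximity to the tuning knob restricts recognition to tuning-related commands.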
- a user may utilize his or her hand to select one or more physical input elements (e.g., knobs, buttons, switches, and/or elements of one or more touch screen displays).
- a desired function may then be selected based at least in part upon the selected physical input elements. For example, if one or more input elements associated with the audio control panel 210 are selected, then an audio control function may be selected.
- a specific selected input element, such as a volume input element 220, may be identified, and a function associated with the selected input element (e.g., a volume adjustment function, etc.) may be identified.
- grammar elements associated with a higher level function may be weighted towards a specific lower level function associated with the selected input element.
- an audio control function may be selected; however, while the set of grammar elements associated with audio control functionality is identified, certain commands may be weighted towards volume control. For example, a received command of "up" may result in increased audio volume; however, non-volume audio commands will still be processed. As another example, had a tuning input element been selected, then the received command of "up" may result in tuning an audio component in an upward direction.
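The weighting just described might be sketched as follows: the full audio grammar stays active, but an ambiguous command such as "up" resolves toward the sub-function of the selected input element. The command table and names are hypothetical.

```python
# Hypothetical interpretations of audio commands; the second key names the
# sub-function (context) each interpretation belongs to, or None if the
# command is unambiguous.
AUDIO_COMMANDS = {
    ("up", "volume"): "volume_up",
    ("up", "tuning"): "tune_up",
    ("seek", None):   "seek_next",
}

def resolve(command, weighted_toward):
    """Prefer the interpretation tied to the selected element's sub-function."""
    if (command, weighted_toward) in AUDIO_COMMANDS:
        return AUDIO_COMMANDS[(command, weighted_toward)]
    # Fall back to any other interpretation so non-weighted commands
    # (e.g., "seek" while volume is weighted) are still processed.
    for (cmd, _ctx), action in AUDIO_COMMANDS.items():
        if cmd == command:
            return action
    return None
```

With the volume element selected, "up" raises the volume, yet "seek" still works; with the tuning element selected, the same "up" tunes upward instead.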
- FIG. 3 is a block diagram of an example speech recognition system 300 or architecture that may be utilized in various embodiments of the disclosure.
- the system 300 may be implemented or embodied as a speech recognition system.
- the system 300 may be implemented or embodied as a component of another system or device, such as an in-vehicle infotainment ("IVI") system associated with a vehicle.
- one or more suitable computer-readable media may be provided for processing user inputs and/or speech inputs. These computer-readable media may include computer-executable instructions that are executed by one or more processing devices in order to process user inputs and/or associated speech inputs.
- the term "computer-readable medium” describes any form of suitable memory or memory device for retaining information in any form, including various kinds of storage devices (e.g., magnetic, optical, static, etc. ). Indeed, various embodiments of the disclosure may be implemented in a wide variety of suitable forms.
- the system 300 may include any number of suitable computing devices associated with suitable hardware and/or software for processing user inputs and/or associated speech inputs. These computing devices may also include any number of processors for processing data and executing computer-executable instructions, as well as other internal and peripheral components that are well-known in the art. Further, these computing devices may include or be in communication with any number of suitable memory devices operable to store data and/or computer-executable instructions. By executing computer-executable instructions, a special purpose computer or particular machine for targeting speech input to various vehicle functions may be formed.
- the system 300 may include one or more processors 305 and memory devices 310 (generally referred to as memory 310). Additionally, the system may include any number of other components in communication with the processors 305, such as any number of input/output ("I/O") devices 315, any number of vehicle audio capture devices 320 (e.g., a microphone), and/or any number of suitable applications 325.
- the I/O devices 315 may include any suitable devices and/or components utilized to capture user input utilized to target speech recognition, such as one or more image capture devices or image sensors 330, any number of proximity sensors 335, and/or any number of input elements 340 (e.g., buttons, knobs, switches, touch screen displays, etc.). Additionally, as desired, the I/O devices 315 may include a wide variety of other components that facilitate user interactions, such as one or more display devices.
- the processors 305 may include any number of suitable processing devices, such as a central processing unit (“CPU”), a digital signal processor (“DSP”), a reduced instruction set computer (“RISC”), a complex instruction set computer (“CISC”), a microprocessor, a microcontroller, a field programmable gate array (“FPGA”), or any combination thereof.
- a chipset (not shown) may be provided for controlling communications between the processors 305 and one or more of the other components of the system 300.
- the system 300 may be based on an Intel® Architecture system, and the processors 305 and chipset may be from a family of Intel® processors and chipsets, such as the Intel® Atom® processor family.
- the processors 305 may also include one or more processors as part of one or more application-specific integrated circuits ("ASICs") or application-specific standard products ("ASSPs") for handling specific data processing functions or tasks. Additionally, any number of suitable I/O interfaces and/or communications interfaces (e.g., network interfaces, data bus interfaces, etc.) may facilitate communication between the processors 305 and/or other components of the system 300.
- the memory 310 may include any number of suitable memory devices, such as caches, read-only memory devices, random access memory (“RAM”), dynamic RAM (“DRAM”), static RAM (“SRAM”), synchronous dynamic RAM (“SDRAM”), double data rate (“DDR”) SDRAM (“DDR-SDRAM”), RAM-BUS DRAM (“RDRAM”), flash memory devices, electrically erasable programmable read only memory (“EEPROM”), non-volatile RAM (“NVRAM”), universal serial bus (“USB”) removable memory, magnetic storage devices, removable storage devices (e.g., memory cards, etc.), and/or non-removable storage devices.
- the memory 310 may include internal memory devices and/or external memory devices in communication with the system 300.
- the memory 310 may store data, executable instructions, and/or various program modules utilized by the processors 305. Examples of data that may be stored by the memory 310 include data files 342, information associated with grammar elements 344, information associated with one or more user profiles 346, and/or any number of suitable program modules and/or applications that may be executed by the processors 305, such as an operating system ("OS") 348, one or more input processing modules 350, and/or one or more speech recognition modules 352.
- the data files 342 may include any suitable data that facilitates the operation of the system 300, the identification and processing of user input, and/or the processing of speech input.
- the stored data files 342 may include, but are not limited to, information associated with the identification of users, information associated with vehicle functions, information associated with respective grammar elements for the vehicle functions, information associated with the identification of various types of user inputs, information associated with the vehicle applications 325, and/or a wide variety of other vehicle and/or speech recognition-related information.
- the grammar element information 344 may include a wide variety of information associated with a plurality of different grammar elements (e.g., commands, speech inputs, etc.) that may be recognized by the speech recognition modules 352.
- the grammar element information 344 may include a plurality of grammar elements associated with any number of functions.
- the plurality of grammar elements may be grouped into any number of subsets associated with various functions.
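The grouping of grammar elements into function-specific subsets might be modeled as a simple mapping from function identifiers to command sets. The function names and commands below are illustrative assumptions, not taken from the disclosure:

```python
# Hypothetical grammar store: each vehicle function maps to the subset of
# grammar elements (audible commands) recognized for that function.
GRAMMAR_ELEMENTS = {
    "audio_control": {"up", "down", "mute", "next", "previous"},
    "window_control": {"up", "down", "stop"},
    "climate_control": {"warmer", "cooler", "auto"},
}

def grammar_subset(function_id):
    """Return the grammar elements for a function, or an empty set."""
    return GRAMMAR_ELEMENTS.get(function_id, set())
```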
- the user profiles 346 may include a wide variety of user preferences and/or parameters associated with various users (e.g., various drivers of a vehicle, etc.) including, but not limited to, identification information for one or more users, user preferences associated with the processing of speech input, user preferences associated with grammar elements to be associated with various functions, and/or user preferences associated with inputs to be associated with various functions.
- the OS 348 may be a suitable module or application that facilitates the general operation of the system 300, as well as the execution of other program modules, such as the input processing modules 350 and/or the speech recognition modules 352.
- the input processing modules 350 may include any number of suitable software modules and/or applications that facilitate the identification of user inputs and/or the selection of functions based at least in part upon the user inputs.
- an input processing module 350 may receive user input data and/or data from one or more I/O devices 315, such as measurements data, image data, and/or data associated with selected input elements.
- the input processing module 350 may evaluate the received data in order to identify a function associated with user input.
- grammar elements associated with the function may be identified and/or determined.
- an identification of the function may be provided to the speech recognition modules 352.
- the function-specific grammar elements may be evaluated in conjunction with received audio input, and targeted speech recognition may be performed.
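Put together, the flow sketched above (user input → function → function-specific grammar) might look like the following; the input event names and grammars are assumptions for illustration:

```python
# Hypothetical mapping from identified user inputs to vehicle functions.
INPUT_TO_FUNCTION = {
    "gesture:point_at_window": "window_control",
    "proximity:volume_knob": "audio_control",
}

# Function-specific grammar subsets (illustrative).
GRAMMARS = {
    "window_control": {"up", "down", "stop"},
    "audio_control": {"up", "down", "mute"},
}

def targeted_recognition(input_event, speech_text):
    """Resolve a user input to a function, then match speech only against
    that function's grammar subset."""
    function_id = INPUT_TO_FUNCTION.get(input_event)
    if function_id is None:
        return None
    # An ambiguous word like "up" resolves via the selected function.
    if speech_text in GRAMMARS[function_id]:
        return (function_id, speech_text)
    return None
```

Note how the same spoken word resolves differently depending on the user input that selected the function, so no hierarchy of spoken commands is needed.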
- a wide variety of different types of user inputs may be identified by the input processing modules 350 as desired in various embodiments including, but not limited to, a user gesture, user proximity to an input element, and/or user selection of an input element.
- an image sensor 330 (e.g., a camera, etc.) may collect images of a user, and the collected images may be evaluated and/or processed by the input processing modules 350 to identify a gesture made by the user.
- gestures may be identified as desired, such as a gesture associated with a hand movement (e.g., complete hand movement, finger movement, etc.) and/or a gesture associated with an indication of (e.g., contact with, proximity to, pointing to, etc.) a defined region of interest within the vehicle.
- a desired function may then be identified or selected based at least in part upon an evaluation of the gesture.
- one or more proximity sensors 335 may be utilized to determine when the user (e.g., a user's hand, etc.) is in proximity with an input element (e.g., a switch, button, knob, input region, etc.), and a desired function may be identified or selected based upon the determined proximity.
- a user selection of one or more input elements 340 (e.g., a switch, knob, etc.) may be identified and utilized to select a desired function.
- the speech recognition modules 352 may include any number of suitable software modules and/or applications that facilitate the processing of received speech input.
- a speech recognition module 352 may identify applicable grammar elements associated with a vehicle function, such as a function selected based upon the evaluation of user input.
- the applicable grammar elements for a function may be a subset of a plurality of grammar elements available for processing by the speech recognition modules 352.
- the grammar elements may be accessed and/or obtained from a wide variety of suitable sources, such as internal memory and/or any number of external devices (e.g., network servers, cloud servers, user devices, etc.).
- the speech recognition module 352 may evaluate the speech input in light of the function-specific grammar elements in order to determine or identify a correspondence between the received speech input and a grammar element. Once a grammar element (or plurality of grammar elements) has been identified as matching the speech input, the speech recognition module 352 may generate and/or output a wide variety of information associated with the grammar element. For example, an identified grammar element may be translated into an input that is provided to an executing vehicle application 325. In this regard, voice commands may be identified and dispatched to relevant vehicle applications 325.
- the identified grammar element may be processed in order to generate one or more control signals and/or commands that are provided to a vehicle application 325, a vehicle system, and/or a vehicle component.
- a recognized speech input may be processed in order to generate output information (e.g., audio output information, display information, messages for communication, etc.) for presentation to a user.
- an audio output associated with the recognition and/or processing of a voice command may be generated and output.
- a visual display may be updated based upon the processing of a voice command.
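The translation of an identified grammar element into a control command for a vehicle application, described above, might be sketched as a lookup table; the application names and actions are hypothetical:

```python
# Hypothetical command table: (function, grammar element) -> control command.
COMMAND_TABLE = {
    ("audio_control", "up"): {"app": "audio_system", "action": "volume_up"},
    ("window_control", "up"): {"app": "window", "action": "raise"},
}

def dispatch(function_id, grammar_element, send):
    """Translate a recognized grammar element into a control command and
    forward it to the relevant vehicle application via `send`."""
    command = COMMAND_TABLE.get((function_id, grammar_element))
    if command is None:
        return False
    send(command)
    return True
```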
- the input processing modules 350 and/or the speech recognition modules 352 may be implemented as any number of suitable modules. Alternatively, a single module may perform the functions of both the input processing modules 350 and the speech recognition modules 352. A few examples of the operations of the input processing modules 350 and/or the speech recognition modules 352 are described in greater detail below with reference to FIGS. 4-7.
- the I/O devices 315 may include any number of suitable devices and/or components that facilitate the collection of information to be provided to the processors 305 and/or the input processing modules 350.
- suitable input devices include, but are not limited to, one or more image sensors 330 or image collection devices (e.g., a camera, etc.), any number of proximity sensors 335, and/or any number of suitable input elements 340.
- the I/O devices 315 may additionally include any number of suitable output devices that facilitate the output of information to users. Examples of suitable output devices include, but are not limited to, one or more speakers and/or one or more displays.
- the displays may include any number of suitable display devices, such as a liquid crystal display (“LCD”), a light-emitting diode (“LED”) display, an organic light-emitting diode (“OLED”) display, and/or a touch screen display.
- Other suitable input and/or output devices may be utilized as desired.
- the image sensors 330 may include any known devices that convert optical images to an electronic signal, such as cameras, charge-coupled devices ("CCDs”), complementary metal oxide semiconductor (“CMOS”) sensors, or the like.
- data collected by the image sensors 330 may be processed in order to determine or identify a wide variety of suitable information. For example, image data may be evaluated in order to identify users, detect user indications, and/or to detect user gestures.
- the proximity sensors 335 may include any known devices configured to detect the presence of nearby objects, such as a user's hand. In certain embodiments, presence may be detected without any physical contact between an object and a proximity sensor. Certain proximity sensors 335 may emit an electromagnetic field or a beam of electromagnetic radiation (e.g., infrared radiation, etc.). Changes in the emitted field and/or the identification of a return signal may then be determined and utilized to identify the presence and/or proximity of an object. Additionally, as desired, a proximity sensor 335 may be associated with any suitable nominal range associated with the detection of an object or target.
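As a toy illustration of turning raw proximity readings into a presence decision (the nominal range and debounce count below are assumptions, not values from the disclosure):

```python
def object_present(readings_cm, nominal_range_cm=10.0, min_hits=3):
    """Debounced presence detection: report an object only after several
    consecutive readings fall within the sensor's nominal range."""
    hits = 0
    for reading in readings_cm:
        # Count consecutive in-range readings; reset on any out-of-range one.
        hits = hits + 1 if reading <= nominal_range_cm else 0
        if hits >= min_hits:
            return True
    return False
```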
- the input elements 340 may include any number of suitable physical components and/or devices configured to receive user input, as well as any number of predefined input regions associated with the receipt of user input.
- suitable physical input elements include, but are not limited to, buttons, knobs, switches, touch screens, capacitive sensing elements, etc.
- a physical input element may generate data (e.g., an electrical signal, etc.) that is provided either directly or indirectly to the input processing modules 350 for evaluation.
- identification information associated with a user selection (e.g., an identification of selected input elements and/or associated functions, etc.) may be provided to the input processing modules 350 for evaluation.
- An input region may be a suitable area or region of interest within a vehicle that is associated with a function.
- designated input regions on the dash, console, or other location within the vehicle may be associated with various functions.
- a gesture associated with an input region (e.g., a user pointing to an input region, user motion in proximity to an input region, etc.) may be identified and utilized to select an associated function.
- a designated input region may include one or more proximity sensors.
- the audio capture devices 320 may include any number of suitable devices, such as microphones, for capturing audio signals and/or voice input, such as spoken words and/or phrases.
- the audio capture devices 320 may include microphones of any known type including, but not limited to, condenser microphones, dynamic microphones, capacitance diaphragm microphones, piezoelectric microphones, optical pickup microphones, and/or various combinations thereof.
- an audio capture device 320 may collect sound waves and/or pressure waves, and provide collected audio data (e.g., voice data) to the processors 305 and/or the speech recognition modules 352 for evaluation.
- collected voice data may be compared to stored profile information in order to identify one or more users.
- any number of vehicle applications 325 may be associated with the system 300.
- information associated with recognized speech inputs may be provided to the applications 325.
- one or more of the applications 325 may be executed by the processors 305.
- one or more of the applications 325 may be executed by other processing devices in communication (e.g., network communication) with the processors 305.
- the applications 325 may include any number of vehicle applications associated with a vehicle including, but not limited to, one or more vehicle control applications, a climate control application, an audio system application, a window (e.g., windows, sunroof, etc.) control application, a seat control application, a display control application, a navigation control application, a Web or other network application, a communications control application, a maintenance application, an application that manages communication with user devices and/or other vehicles, an application that monitors vehicle parameters, and/or any other suitable applications.
- the system 300 or architecture described above with reference to FIG. 3 is provided by way of example only. As desired, a wide variety of other systems and/or architectures may be utilized to perform targeted processing of speech inputs. These systems and/or architectures may include different components and/or arrangements of components than those illustrated in FIG. 3.
- FIG. 4 is a flow diagram of an example method 400 for evaluating user input to target speech recognition to a vehicle function.
- the operations of the method 400 may be performed by a suitable speech recognition system and/or one or more associated modules and/or applications, such as the speech recognition system 300 and/or the associated input processing modules 350 and/or speech recognition modules 352 illustrated in FIG. 3.
- the method 400 may begin at block 405.
- grammar elements associated with any number of respective audible commands for a plurality of vehicle functions and/or applications may be stored.
- sources for the grammar elements may be identified.
- respective subsets of the grammar elements may be associated with various vehicle functions and/or applications.
- a wide variety of different types of configuration information may be taken into account during the configuration of the grammar elements and/or speech recognition association with the grammar elements. For example, one or more users of the vehicle (e.g., a driver) may be identified, and user profile information may be obtained for the one or more users.
- the user profile information may be utilized to identify user-specific grammar elements and/or inputs (e.g., gestures, input element identifications, input element selections, etc.) associated with various functions.
- suitable methods and/or techniques may be utilized to identify a user. For example, a voice sample of a user may be collected and compared to a stored voice sample. As another example, image data for the user may be collected and evaluated utilizing suitable facial recognition techniques. As another example, other biometric inputs (e.g., fingerprints, etc.) may be evaluated to identify a user.
- a user may be identified based upon determining a pairing between the vehicle and a user device (e.g., a mobile device, etc.) and/or based upon the receipt and evaluation of user identification information (e.g., a personal identification number, etc.) entered by the user.
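A device-pairing lookup of the kind described above might be sketched as follows; the device identifiers and profile fields are hypothetical:

```python
# Hypothetical profile store keyed by paired user-device identifiers.
USER_PROFILES = {
    "device:abc123": {
        "user": "driver_1",
        # User-specific grammar elements for a function (illustrative).
        "grammar_overrides": {"audio_control": {"crank it up"}},
    },
}

def profile_for_pairing(device_id):
    """Return the stored profile for a paired device, or None."""
    return USER_PROFILES.get(device_id)
```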
- user input associated with the vehicle may be received and/or identified.
- a user gesture (e.g., a gesture made by a user's hand, an indication of an input element, etc.) may be identified based at least in part upon an evaluation of image data received from an image sensor.
- proximity of the user (e.g., a user's hand, etc.) to an input element (e.g., a physical input element, an input region, etc.) may be determined and utilized to select a function.
- a user selection of one or more input elements may be identified.
- a vehicle function may be selected or identified based at least in part upon an evaluation of the identified user input.
- a subset of the grammar elements associated with the selected function may then be identified at block 435.
- the subset of grammar elements for a function may be pared down based at least in part upon the user input. For example, if the user input is associated with altering the volume of an audio system, then the function may be identified as an audio control function associated with audio control grammar elements. Based upon a determination that the user input is associated with volume control, the audio control grammar elements may be limited to volume control grammar elements.
- the subset of grammar elements associated with the selected function may be biased and/or weighted based upon the received user input.
- audio control grammar elements may be selected and biased towards volume control.
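The paring and biasing described above might be combined in a small scoring step; the candidate commands, scores, and bias weight below are illustrative assumptions:

```python
def pick_command(candidates, bias_terms, bias=2.0):
    """Choose the best-scoring grammar element, weighting the elements
    related to the user input (e.g., volume control) more heavily."""
    scored = {
        element: score * (bias if element in bias_terms else 1.0)
        for element, score in candidates.items()
    }
    return max(scored, key=scored.get)
```

Here a raw recognizer score of 0.4 for "volume up" beats 0.5 for "next track" once the volume bias is applied, while leaving the rest of the audio-control grammar available.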
- audio input may be received from any number of suitable audio collection devices (e.g., microphones).
- the collection of audio input may be initiated based at least in part upon the identified user input. For example, when a function is selected, a microphone may be turned on or activated.
- the identified user input may be utilized to identify relevant collected audio input. For example, a buffer may be utilized to store recently collected audio input. Once a user input is identified, audio input captured immediately prior to, during, and/or immediately after the user input may be identified for processing. In either case, the collected audio may be evaluated at block 445 utilizing the grammar elements associated with the identified function.
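The buffering of recently collected audio might be sketched with a fixed-capacity ring buffer; the capacity is an assumption for the sketch:

```python
from collections import deque

class AudioRingBuffer:
    """Retain the most recent audio frames so that speech captured
    immediately prior to an identified user input is still available."""

    def __init__(self, capacity=16):
        self._frames = deque(maxlen=capacity)

    def push(self, frame):
        self._frames.append(frame)

    def snapshot(self):
        # Frames captured just before (and during) the user input.
        return list(self._frames)
```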
- a grammar element (or plurality of grammar elements) or command associated with the identified function may be identified as corresponding to the collected audio input.
- once a grammar element (or plurality of grammar elements) associated with the function has been identified as matching or otherwise corresponding to the audio input, suitable information associated with the grammar element may be output at block 450, such as an indication of the identified grammar element or a control signal associated with the function.
- for example, if an audio system function has been identified, an "up" command may be identified and processed in order to turn the volume of the radio up.
- as another example, if a window function has been identified, an "up" command may be identified and processed in order to roll a window up.
- a "firmer,” “softer,” or “more lumbar support” command may be processed in order to adjust seat controls.
- a wide variety of suitable commands may be processed with respect to various vehicle functions.
- the method 400 may end following block 450.
- FIG. 5 is a flow diagram of an example method 500 for identifying a gesture associated with the targeting of speech recognition.
- the method 500 illustrates one example implementation of the operations of block 415 illustrated in FIG. 4, as well as the subsequent evaluation of received audio input.
- the operations of the method 500 may be performed by a suitable speech recognition system and/or one or more associated modules and/or applications, such as the speech recognition system 300 and/or the associated input processing modules 350 and/or the speech recognition modules 352 illustrated in FIG. 3.
- the method 500 may begin at block 505.
- an object of interest for purposes of gesture recognition may be identified.
- for example, a user's hand (e.g., a driver's hand, etc.) may be identified as the object of interest.
- image data associated with the identified object of interest may be received.
- an image sensor may capture images associated with the movement of the object of interest, and the captured images may be received for processing.
- the image sensor may process the captured images, and information associated with the performed processing (e.g., information associated with identified gestures, etc.) may be received.
- a gesture associated with the object of interest may be identified.
- a wide variety of different types of gestures may be identified as desired in various embodiments of the invention.
- motion of the object of interest may be tracked and evaluated in order to identify a gesture, such as a user making any number of motions and/or object configurations (e.g., a back and forth motion to denote control of a sunroof, an up and down motion to denote control of a window, a sequence of motions and/or hand configurations associated with control of an audio system or climate control system, etc.).
- proximity of the object to and/or an indication of a region or object of interest may be identified, such as a user pointing to an input element or other object (e.g., pointing to a window, pointing to an audio control panel, pointing to an input region, etc.) or the user placing the object of interest in a position near to or touching an input element or other object.
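A crude classifier for the motion-based gestures above might compare the horizontal and vertical extent of the tracked trajectory; the gesture-to-function mapping here is an assumption, not the disclosure's method:

```python
def classify_motion_gesture(trajectory):
    """Toy classifier: an up-and-down trajectory suggests window control,
    while a back-and-forth trajectory suggests sunroof control."""
    xs = [x for x, _ in trajectory]
    ys = [y for _, y in trajectory]
    horizontal_span = max(xs) - min(xs)
    vertical_span = max(ys) - min(ys)
    if vertical_span > horizontal_span:
        return "window_control"
    return "sunroof_control"
```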
- a function associated with an identified gesture may be identified or determined.
- grammar elements associated with the function may be identified and/or accessed.
- audio capture may be initiated and/or evaluated at block 535, and received audio input may be processed at block 540 in order to identify and/or process voice commands associated with the function. The method may end following block 540.
- FIG. 6 is a flow diagram of an example method 600 for identifying proximity information associated with the targeting of speech recognition.
- the method 600 illustrates one example implementation of the operations of block 420 illustrated in FIG. 4, as well as the subsequent evaluation of received audio input. As such, the operations of the method 600 may be performed by a suitable speech recognition system and/or one or more associated modules and/or applications, such as the speech recognition system 300 and/or the associated input processing modules 350 and/or the speech recognition modules 352 illustrated in FIG. 3.
- the method 600 may begin at block 605. At block 605, the proximity of the user and/or an object associated with the user (e.g., a user's hand, etc.) to an input element may be identified or determined.
- a function associated with the input element may be identified or determined.
- grammar elements associated with the function may be identified and/or accessed.
- audio capture may be initiated and/or evaluated at block 615, and received audio input may be processed at block 620 in order to identify and/or process voice commands associated with the function.
- the method 600 may end following block 620.
- FIG. 7 is a flow diagram of an example method 700 for associating user inputs with grammar elements for speech recognition.
- the operations of the method 700 may be performed by a suitable speech recognition system and/or one or more associated modules and/or applications, such as the speech recognition system 300 and/or the associated input processing modules 350 and/or the speech recognition modules 352 illustrated in FIG. 3.
- the method 700 may begin at block 705.
- a learning indication may be identified. For example, a learn new input function or indication may be identified based upon received user input (e.g., a learning gesture, a voice command, a selection of associated input elements, etc.). In certain embodiments, a learning indication may be identified in association with a designated function. In other embodiments, a learning indication may be identified, and a function may be subsequently designated, selected, or defined. Once a learning indication has been identified, a learning mode may be entered.
- one or more user inputs may be tracked and/or identified.
- the tracked one or more user inputs may then be associated with a desired function at block 715, such as a function selected and/or otherwise specified by a user.
- a user may define or specify user inputs associated with the selection of a particular function for targeted voice recognition.
- the user may be prompted at block 720 for audio input to be associated with the function.
- grammar elements for the function may be modified and/or new grammar elements for the function may be established.
- audio data may be received (e.g., collected from one or more suitable audio capture devices, etc.) at block 725.
- at block 730 at least a portion of the received audio data may be associated with grammar elements (e.g., grammar elements to be modified, new grammar elements, etc.) for the function.
- the operations of the method 700 may end following block 730.
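The learning flow of method 700 might be sketched as follows; the data structures and the example input and phrase are hypothetical:

```python
class LearningMode:
    """Bind a tracked user input and a spoken phrase to a user-selected
    function, extending both the input map and the function's grammar."""

    def __init__(self):
        self.input_to_function = {}
        self.grammar = {}

    def learn(self, tracked_input, function_id, audio_phrase):
        # Associate the tracked input with the function, then associate
        # the collected audio with (new or modified) grammar elements.
        self.input_to_function[tracked_input] = function_id
        self.grammar.setdefault(function_id, set()).add(audio_phrase)
```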
- Certain embodiments of the disclosure described herein may have the technical effect of targeting speech recognition based at least in part upon an evaluation of received user input. For example, in a vehicular environment, a gesture, selection of input elements, and/or other inputs made by a user may be utilized to identify a desired function, and grammar elements associated with the function may be identified for speech recognition purposes. As a result, relatively efficient and intuitive speech recognition may be performed without the user traversing through a hierarchy of speech commands.
- These computer-executable program instructions may be loaded onto a special-purpose computer or other particular machine, a processor, or other programmable data processing apparatus to produce a particular machine, such that the instructions that execute on the computer, processor, or other programmable data processing apparatus create means for implementing one or more functions specified in the flow diagram block or blocks.
- These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement one or more functions specified in the flow diagram block or blocks.
- certain embodiments may provide for a computer program product, comprising a computer-usable medium having a computer-readable program code or program instructions embodied therein, said computer-readable program code adapted to be executed to implement one or more functions specified in the flow diagram block or blocks.
- the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational elements or steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions that execute on the computer or other programmable apparatus provide elements or steps for implementing the functions specified in the flow diagram block or blocks.
- blocks of the block diagrams and flow diagrams support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flow diagrams, and combinations of blocks in the block diagrams and flow diagrams, can be implemented by special-purpose, hardware-based computer systems that perform the specified functions, elements or steps, or combinations of special-purpose hardware and computer instructions.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Acoustics & Sound (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Mechanical Engineering (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2011/067847 WO2013101066A1 (fr) | 2011-12-29 | 2011-12-29 | Accès direct à une grammaire |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2798632A1 true EP2798632A1 (fr) | 2014-11-05 |
EP2798632A4 EP2798632A4 (fr) | 2015-10-07 |
Family
ID=48698302
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP11879105.2A Withdrawn EP2798632A4 (fr) | 2011-12-29 | 2011-12-29 | Accès direct à une grammaire |
Country Status (5)
Country | Link |
---|---|
US (1) | US9487167B2 (fr) |
EP (1) | EP2798632A4 (fr) |
JP (1) | JP5916888B2 (fr) |
CN (1) | CN104040620B (fr) |
WO (1) | WO2013101066A1 (fr) |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11592723B2 (en) | 2009-12-22 | 2023-02-28 | View, Inc. | Automated commissioning of controllers in a window network |
US11054792B2 (en) | 2012-04-13 | 2021-07-06 | View, Inc. | Monitoring sites containing switchable optical devices and controllers |
CN104040620B (zh) | 2011-12-29 | 2017-07-14 | 英特尔公司 | 用于进行直接语法存取的装置和方法 |
US20240046928A1 (en) * | 2012-04-13 | 2024-02-08 | View, Inc. | Controlling optically-switchable devices |
US10964320B2 (en) * | 2012-04-13 | 2021-03-30 | View, Inc. | Controlling optically-switchable devices |
EP2862163A4 (fr) * | 2012-06-18 | 2015-07-29 | Ericsson Telefon Ab L M | Procédés et noeuds permettant d'activer et de produire une entrée dans une application |
US9798799B2 (en) * | 2012-11-15 | 2017-10-24 | Sri International | Vehicle personal assistant that interprets spoken natural language input based upon vehicle context |
US8818716B1 (en) | 2013-03-15 | 2014-08-26 | Honda Motor Co., Ltd. | System and method for gesture-based point of interest search |
EP2857239A1 (fr) * | 2013-10-03 | 2015-04-08 | Volvo Car Corporation | Pare-soleil numérique pour verre automobile |
KR20150066156A (ko) * | 2013-12-06 | 2015-06-16 | 삼성전자주식회사 | 디스플레이 장치 및 이의 제어 방법 |
EP3114640B1 (fr) | 2014-03-05 | 2022-11-02 | View, Inc. | Surveillance de sites comprenant des dispositifs optiques commutables et des organes de commande |
US9751406B2 (en) * | 2014-04-03 | 2017-09-05 | Audi Ag | Motor vehicle and method for controlling a climate control system in a motor vehicle |
PL3037916T3 (pl) * | 2014-12-24 | 2021-08-02 | Nokia Technologies Oy | Monitorowanie |
DE102015200006A1 (de) * | 2015-01-02 | 2016-07-07 | Volkswagen Ag | Vorrichtung und Verfahren zur Unterstützung eines Anwenders vor einer Bedienung eines Schalters zur elektromotorischen Verstellung eines Teils eines Fortbewegungsmittels |
DE102015007361B3 (de) * | 2015-06-10 | 2016-02-18 | Audi Ag | Verfahren zum Betreiben wenigstens einer Funktionseinrichtung eines Kraftfahrzeugs |
US9921805B2 (en) * | 2015-06-17 | 2018-03-20 | Lenovo (Singapore) Pte. Ltd. | Multi-modal disambiguation of voice assisted input |
JP2017090613A (ja) * | 2015-11-09 | 2017-05-25 | Mitsubishi Motors Corporation | Speech recognition control system |
US10388280B2 (en) * | 2016-01-27 | 2019-08-20 | Motorola Mobility Llc | Method and apparatus for managing multiple voice operation trigger phrases |
AU2017257789B2 (en) * | 2016-04-26 | 2022-06-30 | View, Inc. | Controlling optically-switchable devices |
JP2020144275A (ja) * | 2019-03-07 | 2020-09-10 | Honda Motor Co., Ltd. | Agent device, agent device control method, and program |
CN110022427A (zh) * | 2019-05-22 | 2019-07-16 | Leshan Normal University | Intelligent assistance system for automobile use |
KR20210133600A (ko) * | 2020-04-29 | 2021-11-08 | Hyundai Motor Company | Vehicle speech recognition method and apparatus |
US11967306B2 (en) | 2021-04-14 | 2024-04-23 | Honeywell International Inc. | Contextual speech recognition methods and systems |
KR20220150640A (ko) * | 2021-05-04 | 2022-11-11 | Hyundai Motor Company | Vehicle and control method thereof |
Family Cites Families (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5699456A (en) * | 1994-01-21 | 1997-12-16 | Lucent Technologies Inc. | Large vocabulary connected speech recognition system and method of language representation using evolutional grammar to represent context free grammars |
JPH0934488A (ja) | 1995-07-18 | 1997-02-07 | Mazda Motor Corp | Voice operation device for on-vehicle equipment |
US7085710B1 (en) * | 1998-01-07 | 2006-08-01 | Microsoft Corporation | Vehicle computer system audio entertainment system |
KR100259918B1 (ko) * | 1998-03-05 | 2000-06-15 | Yun Jong-yong | Apparatus and method for short-message speech synthesis in a hands-free kit |
WO1999057648A1 (fr) * | 1998-05-07 | 1999-11-11 | Art - Advanced Recognition Technologies Ltd. | Handwriting and voice control of vehicle components |
DE69814181T2 (de) * | 1998-09-22 | 2004-03-04 | Nokia Corp. | Method and device for configuring a speech recognition system |
US20050131695A1 (en) * | 1999-02-04 | 2005-06-16 | Mark Lucente | System and method for bilateral communication between a user and a system |
US6430531B1 (en) * | 1999-02-04 | 2002-08-06 | Soliloquy, Inc. | Bilateral speech system |
JP2001216069A (ja) * | 2000-02-01 | 2001-08-10 | Toshiba Corp | Operation input device and direction detection method |
US6574595B1 (en) * | 2000-07-11 | 2003-06-03 | Lucent Technologies Inc. | Method and apparatus for recognition-based barge-in detection in the context of subword-based automatic speech recognition |
US7139709B2 (en) * | 2000-07-20 | 2006-11-21 | Microsoft Corporation | Middleware layer between speech related applications and engines |
US7085723B2 (en) * | 2001-01-12 | 2006-08-01 | International Business Machines Corporation | System and method for determining utterance context in a multi-context speech application |
JP2003005781A (ja) * | 2001-06-20 | 2003-01-08 | Denso Corp | Control device with speech recognition function, and program |
US6868383B1 (en) * | 2001-07-12 | 2005-03-15 | At&T Corp. | Systems and methods for extracting meaning from multimodal inputs using finite-state devices |
US7149694B1 (en) * | 2002-02-13 | 2006-12-12 | Siebel Systems, Inc. | Method and system for building/updating grammars in voice access systems |
US7548847B2 (en) * | 2002-05-10 | 2009-06-16 | Microsoft Corporation | System for automatically annotating training data for a natural language understanding system |
US7986974B2 (en) | 2003-05-23 | 2011-07-26 | General Motors Llc | Context specific speaker adaptation user interface |
US20050091036A1 (en) * | 2003-10-23 | 2005-04-28 | Hazel Shackleton | Method and apparatus for a hierarchical object model-based constrained language interpreter-parser |
US7395206B1 (en) * | 2004-01-16 | 2008-07-01 | Unisys Corporation | Systems and methods for managing and building directed dialogue portal applications |
US7778830B2 (en) * | 2004-05-19 | 2010-08-17 | International Business Machines Corporation | Training speaker-dependent, phrase-based speech grammars using an unsupervised automated technique |
US7925506B2 (en) * | 2004-10-05 | 2011-04-12 | Inago Corporation | Speech recognition accuracy via concept to keyword mapping |
US7630900B1 (en) * | 2004-12-01 | 2009-12-08 | Tellme Networks, Inc. | Method and system for selecting grammars based on geographic information associated with a caller |
CN1815556A (zh) * | 2005-02-01 | 2006-08-09 | Matsushita Electric Industrial Co., Ltd. | Method and system for operating a vehicle using voice commands |
US7949529B2 (en) * | 2005-08-29 | 2011-05-24 | Voicebox Technologies, Inc. | Mobile systems and methods of supporting natural language human-machine interactions |
US7729911B2 (en) * | 2005-09-27 | 2010-06-01 | General Motors Llc | Speech recognition method and system |
US8311836B2 (en) * | 2006-03-13 | 2012-11-13 | Nuance Communications, Inc. | Dynamic help including available speech commands from content contained within speech grammars |
US8301448B2 (en) * | 2006-03-29 | 2012-10-30 | Nuance Communications, Inc. | System and method for applying dynamic contextual grammars and language models to improve automatic speech recognition accuracy |
US7778837B2 (en) * | 2006-05-01 | 2010-08-17 | Microsoft Corporation | Demographic based classification for local word wheeling/web search |
US7721207B2 (en) | 2006-05-31 | 2010-05-18 | Sony Ericsson Mobile Communications Ab | Camera based control |
US8332218B2 (en) * | 2006-06-13 | 2012-12-11 | Nuance Communications, Inc. | Context-based grammars for automated speech recognition |
US8214219B2 (en) * | 2006-09-15 | 2012-07-03 | Volkswagen Of America, Inc. | Speech communications system for a vehicle and method of operating a speech communications system for a vehicle |
US20080140390A1 (en) * | 2006-12-11 | 2008-06-12 | Motorola, Inc. | Solution for sharing speech processing resources in a multitasking environment |
US20080154604A1 (en) * | 2006-12-22 | 2008-06-26 | Nokia Corporation | System and method for providing context-based dynamic speech grammar generation for use in search applications |
US20090055180A1 (en) * | 2007-08-23 | 2009-02-26 | Coon Bradley S | System and method for optimizing speech recognition in a vehicle |
US20090055178A1 (en) * | 2007-08-23 | 2009-02-26 | Coon Bradley S | System and method of controlling personalized settings in a vehicle |
US9031843B2 (en) * | 2007-09-28 | 2015-05-12 | Google Technology Holdings LLC | Method and apparatus for enabling multimodal tags in a communication device by discarding redundant information in the tags training signals |
WO2009045861A1 (fr) * | 2007-10-05 | 2009-04-09 | Sensory, Incorporated | Systems and methods of performing speech recognition using gestures |
DE102008051757A1 (de) * | 2007-11-12 | 2009-05-14 | Volkswagen Ag | Multimodal user interface of a driver assistance system for inputting and presenting information |
US8140335B2 (en) * | 2007-12-11 | 2012-03-20 | Voicebox Technologies, Inc. | System and method for providing a natural language voice user interface in an integrated voice navigation services environment |
CN101323305A (zh) * | 2008-05-14 | 2008-12-17 | Chery Automobile Co., Ltd. | Vehicle-mounted speech recognition control system and control method thereof |
US8407057B2 (en) * | 2009-01-21 | 2013-03-26 | Nuance Communications, Inc. | Machine, system and method for user-guided teaching and modifying of voice commands and actions executed by a conversational learning system |
US20100312469A1 (en) * | 2009-06-05 | 2010-12-09 | Telenav, Inc. | Navigation system with speech processing mechanism and method of operation thereof |
WO2011082340A1 (fr) * | 2009-12-31 | 2011-07-07 | Volt Delta Resources, Llc | Method and system for processing multiple speech recognition results from a single utterance |
US8296151B2 (en) * | 2010-06-18 | 2012-10-23 | Microsoft Corporation | Compound gesture-speech commands |
US8700392B1 (en) * | 2010-09-10 | 2014-04-15 | Amazon Technologies, Inc. | Speech-inclusive device interfaces |
US8893054B2 (en) * | 2010-12-08 | 2014-11-18 | At&T Intellectual Property I, L.P. | Devices, systems, and methods for conveying gesture commands |
US9008904B2 (en) * | 2010-12-30 | 2015-04-14 | GM Global Technology Operations LLC | Graphical vehicle command system for autonomous vehicles on full windshield head-up display |
US20120226498A1 (en) * | 2011-03-02 | 2012-09-06 | Microsoft Corporation | Motion-based voice activity detection |
CN104040620B (zh) | 2011-12-29 | 2017-07-14 | Intel Corporation | Apparatus and method for direct grammar access |
US9092394B2 (en) * | 2012-06-15 | 2015-07-28 | Honda Motor Co., Ltd. | Depth based context identification |
- 2011
- 2011-12-29 CN CN201180076089.4A patent/CN104040620B/zh active Active
- 2011-12-29 US US13/977,535 patent/US9487167B2/en active Active
- 2011-12-29 EP EP11879105.2A patent/EP2798632A4/fr not_active Withdrawn
- 2011-12-29 WO PCT/US2011/067847 patent/WO2013101066A1/fr active Application Filing
- 2011-12-29 JP JP2014548779A patent/JP5916888B2/ja active Active
Also Published As
Publication number | Publication date |
---|---|
US9487167B2 (en) | 2016-11-08 |
CN104040620B (zh) | 2017-07-14 |
WO2013101066A1 (fr) | 2013-07-04 |
JP5916888B2 (ja) | 2016-05-11 |
JP2015509204A (ja) | 2015-03-26 |
EP2798632A4 (fr) | 2015-10-07 |
US20140229174A1 (en) | 2014-08-14 |
CN104040620A (zh) | 2014-09-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9487167B2 (en) | Vehicular speech recognition grammar selection based upon captured or proximity information | |
US20140244259A1 (en) | Speech recognition utilizing a dynamic set of grammar elements | |
KR102528466B1 (ko) | Method for processing speech signals of multiple speakers and electronic device therefor | |
US20180046255A1 (en) | Radar-based gestural interface | |
US9953634B1 (en) | Passive training for automatic speech recognition | |
US8285545B2 (en) | Voice command acquisition system and method | |
JP6432233B2 (ja) | Vehicle device control apparatus and control content retrieval method | |
US10042432B2 (en) | Programmable onboard interface | |
US20140058584A1 (en) | System And Method For Multimodal Interaction With Reduced Distraction In Operating Vehicles | |
CN105355202A (zh) | Speech recognition apparatus, vehicle having the same, and control method thereof | |
US20230102157A1 (en) | Contextual utterance resolution in multimodal systems | |
WO2014070872A2 (fr) | System and method for multimodal interaction with reduced distraction in operating vehicles | |
JP2017090611A (ja) | Speech recognition control system | |
US20140270382A1 (en) | System and Method for Identifying Handwriting Gestures In An In-Vehicle Information System | |
JP2017090613A (ja) | Speech recognition control system | |
US20140168068A1 (en) | System and method for manipulating user interface using wrist angle in vehicle | |
JP2017090612A (ja) | Speech recognition control system | |
US10655981B2 (en) | Method for updating parking area information in a navigation system and navigation system | |
CN114678021B (zh) | Audio signal processing method and apparatus, storage medium, and vehicle | |
US11996099B2 (en) | Dialogue system, vehicle, and method of controlling dialogue system | |
US9715878B2 (en) | Systems and methods for result arbitration in spoken dialog systems | |
US11580958B2 (en) | Method and device for recognizing speech in vehicle | |
US20140343947A1 (en) | Methods and systems for managing dialog of speech systems | |
US20240300328A1 (en) | Vehicle And Control Method Thereof | |
US20230197076A1 (en) | Vehicle and control method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20140616 |
|
AK | Designated contracting states |
Kind code of ref document: A1 |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: ROSARIO, BARBARA |
Inventor name: GRAUMANN, DAVID, L. |
|
DAX | Request for extension of the european patent (deleted) | ||
RA4 | Supplementary search report drawn up and despatched (corrected) |
Effective date: 20150902 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G10L 15/22 20060101ALI20150828BHEP |
Ipc: G06K 9/00 20060101ALI20150828BHEP |
Ipc: G06F 3/16 20060101ALI20150828BHEP |
Ipc: B60R 16/037 20060101ALI20150828BHEP |
Ipc: G06F 3/01 20060101ALI20150828BHEP |
Ipc: G10L 15/00 20130101AFI20150828BHEP |
Ipc: G10L 15/26 20060101ALI20150828BHEP |
Ipc: G10L 15/183 20130101ALI20150828BHEP |
Ipc: G06F 3/0488 20130101ALI20150828BHEP |
|
R17P | Request for examination filed (corrected) |
Effective date: 20140616 |
|
17Q | First examination report despatched |
Effective date: 20180413 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20200108 |