WO2015037396A1 - Audio output control device, program, and recording medium - Google Patents
Audio output control device, program, and recording medium
- Publication number
- WO2015037396A1 (PCT/JP2014/071582; JP2014071582W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- output
- audio
- content
- value
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/162—Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3629—Guidance using speech or audio output, e.g. text-to-speech
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3661—Guidance output on an external device, e.g. car radio
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R27/00—Public address systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2499/00—Aspects covered by H04R or H04S not otherwise provided for in their subgroups
- H04R2499/10—General applications
- H04R2499/13—Acoustic transducers and sound field adaptation in vehicles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
Definitions
- the present invention relates to an audio output control device that arbitrates audio output when a plurality of pieces of audio information exist at the same time.
- arbitration of audio output is performed to limit the output targets.
- arbitration of audio output is performed by the following method.
- Patent Document 1 suggests that, in order to prevent the same type of information from being repeatedly provided to the driver, data having the same content is shown on the display when it is input a second time, and is discarded when it is input a third time or later.
- arbitration of audio output according to the information value of the audio information can be performed flexibly.
- the audio output control device includes a control unit.
- the control unit causes the audio output device to output audio information in response to output requests supplied from a plurality of output request units that request output of audio information.
- the control unit compares the levels of the information values set in advance for the pieces of audio information corresponding to these output requests, and outputs with priority the audio information having the higher information value.
- the control unit includes a determination unit and a value variable setting unit.
- the determination unit determines whether content corresponding to each piece of audio information is output from the display device. For example, this determination may be made based on the result of monitoring the output of each content to the display device, or based on information related to the content that is set in advance together with the information value of the audio information. Alternatively, arbitration of content display may be performed, and the determination may be made based on that arbitration result. The determination unit may perform the determination before the content is actually displayed, or after the content is actually displayed.
- the value variable setting unit is configured to variably set the information value of the audio information corresponding to the content according to the determination result by the determination unit. With this configuration, when a plurality of pieces of voice information become output candidates at the same time, it is possible to perform variable setting of the information value considering whether or not content corresponding to each piece of voice information is displayed.
- as a result, the schedule for outputting audio information can be adjusted flexibly, and the audio output can be adapted to the information value of each piece of audio information.
- the value variable setting unit may reduce the information value of the audio information corresponding to the content when the content is output from the display device. In this case, each piece of information tends to be presented by either audio output or content display alone, so a large amount of information can be presented to the user efficiently.
- the value variable setting unit may increase the information value of the audio information corresponding to the content when the content is output from the display device.
- in this case, audio output is performed more readily for information that is also displayed. Audio output in addition to the display can therefore increase the opportunities for emphasizing displayed information when presenting it to the user.
- the value variable setting unit can variably set the information value of the audio information in various ways. For example, when the content is not output from the display device, the information value of the audio information corresponding to that content may be increased or decreased. When the information value is increased in this way, the information value of other audio information whose content is displayed becomes relatively lower, so, for the reasons already described, a large amount of information can be presented to the user efficiently. Conversely, when the information value is decreased, the information value of other audio information whose content is displayed becomes relatively higher, which, for the reasons already described, increases the opportunities to emphasize displayed information when presenting it to the user.
- the value variable setting unit may variably set the information value of the audio information corresponding to the content according to the display form of the content on the display device.
- for example, when the amount of display information of the content is small, the amount of decrease in the information value of the audio information may be made small, and when the amount of display information is large, the amount of decrease may be made large. In this case, the amount of information presented to the user tends to remain constant across a plurality of pieces of audio information that are output candidates at the same time, so a large amount of information can be presented to the user fairly.
- conversely, when the amount of display information of the content is small, the amount of increase in the information value of the audio information may be made small, and when the amount of display information is large, the amount of increase may be made large. In this case, audio output is performed more readily as the content's display information amount is larger, which increases the opportunities to further emphasize information with a large amount of display information when presenting it to the user.
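As a toy illustration of the variable value setting discussed above (both the decrease and increase policies, scaled by the display information amount), consider the following sketch. The function name and the step sizes are assumptions introduced here, not values from the specification:

```python
def adjusted_value(base_value, content_displayed, display_info_amount,
                   raise_on_display=False):
    """Return an adjusted information value for one piece of audio information.

    base_value:          default information value of the audio information
    content_displayed:   True if the corresponding content is shown on the display
    display_info_amount: "small" or "large" (the content's display information amount)
    raise_on_display:    select the 'increase' policy instead of the 'decrease' one
    """
    if not content_displayed:
        return base_value                 # no corresponding display content: unchanged
    # the magnitude of the change grows with the display information amount
    step = 0.1 if display_info_amount == "small" else 0.3
    if raise_on_display:
        return base_value * (1.0 + step)  # emphasize information that is also displayed
    return base_value * (1.0 - step)      # the display carries it; lower the audio value
```

With the decrease policy, audio whose content is richly displayed yields the floor to audio that has no display; with the increase policy, displayed information is reinforced by audio as well.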
- one aspect of the present invention can also be distributed on the market as a program, specifically a program for causing a computer to function as the control unit described above.
- when this program is incorporated into one or more computers, the same effects as those achieved by the audio output control device of one aspect of the present invention can be obtained.
- the program according to one aspect of the present invention may be stored in a ROM or flash memory incorporated in a computer and loaded into the computer from there, or may be loaded into the computer via a network, and then used.
- the above program may be used by being recorded on a recording medium of any form readable by a computer.
- the recording medium may be a tangible non-transitory recording medium.
- Examples of the recording medium include a portable semiconductor memory (for example, a USB memory or a memory card (registered trademark)).
- FIG. 6A is an explanatory diagram illustrating a display screen when display content corresponding to audio information is not output, and FIG. 6B is an explanatory diagram illustrating a display screen when display content corresponding to audio information is output. A further drawing illustrates specific example 1 of the audio arbitration process.
- the output control device 1 is connected to audio output devices 30a and 30b such as speakers, an image output device 30c such as a display, and a plurality of in-vehicle devices such as in-vehicle ECUs 20, and together they constitute an audio/display output system for a vehicle.
- the audio output devices 30a and 30b are provided at various locations in the passenger compartment.
- the image output device 30c is provided at a position in the vehicle interior where the driver (user) can see it.
- the output control device 1 is an in-vehicle device that outputs audio information and image information in response to an output request from an application operating on the in-vehicle ECU 20.
- the in-vehicle ECU 20 is a plurality of electronic control devices that execute applications that realize various functions for vehicles.
- examples of these functions include route guidance by voice and images linked with navigation, toll guidance by voice and images linked with an ETC (registered trademark, Electronic Toll Collection System) electronic toll collection system, driving assistance guidance by voice and images linked with a vehicle periphery monitoring system, and the provision of various information such as weather information and road information by voice and images.
- an application executed by one of these in-vehicle ECUs 20 notifies the output control device 1 of an information output request (an audio output request or an image output request) regarding an information output event that has occurred.
- the output control device 1 performs scheduling and mediation of audio output and image output in response to the information output request notified from the application, and outputs information related to the information output event via the output devices 30a to 30c.
- the output control device 1 is an information processing device mainly composed of a CPU, a memory and the like.
- the output control device 1 includes an input / output interface 11, a control unit 12, a storage unit 14, and an output unit 16 as functional configurations.
- the input / output interface 11 is a communication interface that transmits and receives information to and from the in-vehicle ECU 20 via the in-vehicle network 40.
- Information transmitted from the in-vehicle ECU 20 to the output control device 1 is input to the control unit 12 via the input / output interface 11.
- the storage unit 14 stores a plurality of types of content such as a program defining various processes performed by the control unit 12 and the above-described audio information and image information.
- the storage unit 14 is configured by, for example, a semiconductor memory that can be read by the control unit 12.
- the control unit 12 includes a content management unit 13 and an arbitration unit 15.
- the control unit 12 may have a CPU.
- the control unit 12 can function as the content management unit 13 and the arbitration unit 15 by executing processing according to the program stored in the storage unit 14.
- when arbitrating audio output and image output for a plurality of information output requests, the content management unit 13 acquires the following content information corresponding to each information output request output from an application, and supplies the acquired content information to the arbitration unit 15.
- content information is defined as supplementary information related to an information output request (hereinafter referred to as “audio output request”) output from an application, as shown in FIG.
- the arbitration unit 15 arbitrates audio output based on the content information related to the audio output request.
- the information other than the display arbitration result described later may be stored in the storage unit 14 in advance, but in the present embodiment it is input from the in-vehicle ECU 20 (application) in a form included in the audio output request.
- the display arbitration result described later in the content information is written by the arbitration unit 15 as the arbitration result of image output when there is image information related to or accompanying the audio information (hereinafter referred to as "display content").
- the content information includes information such as a deferrable time limit, content length, significant information end time, interruptable time, life, voice information value, display interlocking degree, and display mediation result, as shown in FIG.
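As a rough illustration, the fields listed above could be modeled as a record like the following. This is a minimal sketch; the field names, types, and the `effective_life` helper are paraphrases introduced here, not identifiers from the specification:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ContentInfo:
    """Content information attached to one audio output request (times in seconds)."""
    deferrable_time_limit: float        # allowed delay from event occurrence to output start
    content_length: float               # time needed to output the audio to the end
    significant_end_time: float         # time by which the meaningful part is conveyed
    interruptable_times: list = field(default_factory=list)  # semantic break points
    life: Optional[float] = None        # deadline after the output event; None = unspecified
    audio_value: float = 0.0            # information value (priority) of the audio
    display_interlock: int = 0          # 1: should accompany the display content, 0: either suffices
    display_result: Optional[dict] = None  # written later by the arbitration unit

    def effective_life(self) -> float:
        # default when the requesting application specifies no life:
        # deferrable time limit + content length, as described in the text
        if self.life is not None:
            return self.life
        return self.deferrable_time_limit + self.content_length
```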
- the deferrable time limit is information indicating the allowable delay from the time when an audio output event occurs in the application until the output of the audio information is executed.
- the deferrable time limit is set shorter for highly urgent audio information that needs to be presented to the user early, and longer for audio information with low urgency.
- Content length is information indicating the time required to output audio information to the end.
- the significant information end time indicates a time at which the content of the audio information can be substantially transmitted to the user.
- the significant information end time can be set shorter than the content length when wording at the end of the sentence that carries no significant content can be omitted from the audio information being output. For example, for guidance audio announcing a right turn, the time at which the phrase conveying the turn direction has finished being output can be set as the significant information end time, with any closing wording omitted.
- the interruptable time is information indicating times at which there is a semantic break, such as a pause between phrases, in the audio information being output.
- the life is information indicating the deadline by which the audio information must be transmitted after the audio output event has occurred in the application.
- the life may be a time preset in the system of the output control device 1, or may be specified by the application that issued the audio output request. For example, when there is no designation from the requesting application, the life can be set as the deferrable time limit + the content length. Alternatively, the requesting application can specify a concrete deadline, such as the time when the audio output event occurred + 1 hour (i.e., output within 1 hour).
- the audio information value is information that defines the information value (priority) of the audio information. For example, a default value can be defined as the audio information value for each type (category) of audio information. The audio information can be classified into categories according to its purpose and contents, such as safety notification, failure notification, route guidance, toll guidance, and entertainment information.
- the display interlocking degree is information that defines the degree of connection with the display content. Specifically, it is expressed as a binary value: whether the audio information should be presented to the user together with the display content, or whether presenting either the audio information or the display content alone is sufficient. In the former case the display interlocking degree is set to 1, and in the latter case to 0.
- the display mediation result is described by the mediation unit 15 as an image output mediation result when there is display content corresponding to audio information.
- the display mediation result represents whether the display content corresponding to the audio information is output from the image output device 30c, and also represents the size of the display area of the display content and the display form. Examples of the display form include a form in which the display content is represented by characters, a form represented by an icon, and a form represented by both.
- the storage unit 14 stores information necessary for the content management unit 13 to acquire content information related to an audio output request, such as a content information template and the contents of content information used for general purposes. Based on the information stored in the storage unit 14, the content management unit 13 acquires the content information related to the audio information from the application that issued the audio output request.
- the arbitration unit 15 performs arbitration, such as schedule adjustment of audio output, in consideration of the timing attributes of the content information and the information value of the audio information. The processing executed by the arbitration unit 15 is described in detail later.
- the output unit 16 causes a predetermined audio output device 30a, 30b to output an audio output signal based on the audio information output as the arbitration result by the arbitration unit 15, or based on image information (display content) output as the arbitration result. This is an interface for outputting an image output signal to a predetermined image output device 30c.
- an information output event A has occurred in an application executed by an in-vehicle ECU 20 (S100).
- the control unit 12 performs processing related to image output arbitration (display arbitration processing) in S101.
- This display mediation processing may be executed, for example, in the same manner as the patent document (Japanese Patent Laid-Open No. 2012-190440) disclosed by the applicant of the present application. Therefore, a detailed description regarding this process is omitted.
- when the information output event A includes an image output event A and an audio output event A, the control unit 12 writes the result of the display arbitration processing (the display arbitration result) into the content information of the audio information corresponding to the information output event A.
- in the display arbitration result, information indicating whether or not the display content corresponding to the audio output event (audio information) is output from the image output device 30c, the size of the display area of the display content, the display form described above, and the like are described.
- the control unit 12 receives an audio output request A for outputting audio information related to the audio output event A from the request source application via the input / output interface 11 (S102).
- when the control unit 12 is not outputting audio information based on another audio output request and no audio information based on another audio output request is stored in the output standby buffer described later, output of the audio information related to the audio output request A is started via the output unit 16.
- the control unit 12 performs a sound mediation process (S110) described later.
- control unit 12 receives an audio output request B for outputting audio information related to the audio output event B from the requesting application via the input / output interface 11 (S108).
- the audio mediation process is executed for the first audio output request A and the second audio output request B.
- the audio mediation process is executed by the mediation unit 15 of the control unit 12.
- the detailed procedure of the voice arbitration process will be described later.
- the control unit 12 performs audio output based on the result of the audio arbitration process.
- the audio information is output via the output unit 16.
- the audio information is output via the output unit 16 in the schedule order adjusted by the arbitration unit 15. After outputting the audio information, the control unit 12 ends this process.
- the arbitration unit 15 acquires the significant information end time of the preceding audio information from the content information regarding the preceding audio output request A acquired by the content management unit 13. In S202, the arbitration unit 15 acquires the deferrable time limit of the subsequent audio information from the content information regarding the subsequent audio output request B acquired by the content management unit 13.
- the arbitration unit 15 compares the significant information end time of the preceding audio information with the deferrable time limit of the subsequent audio information, and branches the processing according to their time relationship. For example, when the significant information end time of the preceding audio information indicated by the content information related to the audio output request A corresponds to the content length, the significant information end time used for the comparison corresponds to the output start time of the preceding audio information + the content length.
- the arbitration unit 15 proceeds to S206.
- the arbitration unit 15 proceeds to S210.
- the arbitration unit 15 stores the output data of the subsequent audio information in an output standby buffer provided in a predetermined area of the memory.
- the output standby buffer is a memory area for temporarily storing audio information to be output after audio information to be output with priority.
- the output standby buffer is used to defer output of audio information to be output later until output of audio information to be output with priority is completed.
- the arbitration unit 15 sets the output data of the first audio information as an output candidate, and ends this process.
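The first stage of the arbitration just described (compare the significant information end time of the preceding audio with the deferrable time limit of the subsequent audio, then either defer the latter or fall through to value comparison) can be sketched as follows. All names are illustrative assumptions, not identifiers from the specification:

```python
def arbitrate_timing(prev_significant_end, next_deferrable_limit,
                     standby_buffer, next_audio):
    """First-stage arbitration between a preceding and a subsequent audio request.

    Returns True if the subsequent audio was deferred to the standby buffer,
    False if the second stage (information value comparison) is needed.
    """
    if next_deferrable_limit >= prev_significant_end:
        # the subsequent audio can wait until the meaningful part of the
        # preceding audio has finished without missing its own deadline
        standby_buffer.append(next_audio)   # cf. S206: park it in the standby buffer
        return True                         # cf. S208: preceding audio stays the candidate
    return False                            # cf. S210 onward: compare information values
```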
- the arbitration unit 15 acquires the audio information value A of the preceding audio information from the content information related to the preceding audio output request A, and further acquires the audio information value B of the subsequent audio information from the content information related to the subsequent audio output request B.
- the control unit 12 performs processing related to resetting the audio information values (hereinafter referred to as the "value resetting process") on the audio information value A of the preceding audio information and the audio information value B of the subsequent audio information. The detailed procedure of this value resetting process is described later.
- based on the result of the value resetting process, the arbitration unit 15 compares the audio information value A of the preceding audio information with the audio information value B of the subsequent audio information, determines the audio information with the higher value as "priority", and determines the audio information with the lower value as "non-priority". When the audio information values A and B are equal, the preceding audio information is "priority" and the subsequent audio information is "non-priority". Further, when other audio information is stored in the output standby buffer, the processing of S200 to S214 is performed using that audio information as the preceding audio information, so that all pieces of audio information are ranked by priority.
- the arbitration unit 15 acquires the life of the non-prioritized voice information from the content information related to the non-prioritized voice information. In S218, the arbitration unit 15 acquires the significant information end time of the priority audio information from the content information related to the priority audio information.
- the arbitration unit 15 compares the significant information end time of the priority audio information with the life of the non-priority audio information, and branches the processing according to their time relationship.
- the arbitration unit 15 proceeds to S222.
- the arbitration unit 15 proceeds to S224.
- the arbitrating unit 15 stores the non-prioritized audio information output data in the output standby buffer. In S224, the arbitrating unit 15 rejects the non-prioritized voice information output request. In next S226, the arbitration unit 15 sets the output data of the priority audio information as an output candidate, and ends this process.
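The second stage above (priority ranking by audio information value, then deferring or rejecting the non-priority audio depending on its life) might look like this minimal sketch; the dict keys and the tie-breaking rule favoring the preceding request follow the text, while everything else is an assumption:

```python
def arbitrate_value(audio_a, audio_b, standby_buffer):
    """Second-stage arbitration. Each audio is a dict with keys
    'value' (audio information value), 'life', and 'significant_end'.
    audio_a is the preceding request; on a tie it wins, as described above.
    Returns the priority audio (the output candidate)."""
    if audio_a["value"] >= audio_b["value"]:
        priority, non_priority = audio_a, audio_b
    else:
        priority, non_priority = audio_b, audio_a
    # cf. S220: can the non-priority audio survive until the priority one
    # has conveyed its meaningful content?
    if non_priority["life"] >= priority["significant_end"]:
        standby_buffer.append(non_priority)  # cf. S222: defer it to the standby buffer
    # else, cf. S224: the non-priority output request is rejected (dropped)
    return priority                          # cf. S226: output candidate
```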
- the arbitration unit 15 acquires the display arbitration result of the audio information to be processed (hereinafter referred to as "target audio information") from the content information regarding the target audio information.
- the arbitration unit 15 determines, based on the acquired display arbitration result, whether or not the display content corresponding to the target audio information is output from the image output device 30c, and branches the processing according to the determination result.
- the arbitration unit 15 proceeds to S304.
- FIG. 6A illustrates an image linked with the vehicle periphery monitoring system. For example, it is assumed that the image illustrated in FIG. 6A is displayed when the user is informed of the presence of a pedestrian by sound that is linked to the system.
- the arbitrating unit 15 acquires the display interlocking degree of the target voice information from the content information related to the target voice information.
- the arbitration unit 15 branches the process according to the acquired display interlocking degree.
- the display linkage degree of the target audio information is 1 (that is, when the connection between the target audio information and the display content is strong)
- the arbitration unit 15 proceeds to S307.
- the display interlocking degree of the target audio information is 0 (that is, when the connection is weak)
- the arbitration unit 15 proceeds to S314.
- the arbitration unit 15 sets a coefficient K1 (where K1> 1) for increasing the information value of the target voice information.
- the arbitration unit 15 specifies the display form of the display content corresponding to the target audio information based on the display arbitration result acquired in S300, and branches the process according to the display form.
- the arbitrating unit 15 proceeds to S309.
- the arbitrating unit 15 proceeds to S310.
- Examples of a display form with a large amount of display information or an emphasized display form include a display form in which display content is output by both characters and icons.
- examples of a display form with a small amount of display information or a display form that is not emphasized include a display form in which a display content is output by only one of a character and an icon.
- the arbitrating unit 15 increases the coefficient K1 set in S307 by a predetermined ratio.
- the arbitrating unit 15 reduces the coefficient K1 set in S307 by a predetermined ratio within a range where the resulting coefficient K1 is greater than 1.
- the arbitration unit 15 resets the voice information value by multiplying the voice information value of the target voice information by the coefficient K1. When the voice information value is reset, this process ends.
- the arbitration unit 15 sets a coefficient K2 (where 0 ⁇ K2 ⁇ 1) for reducing the information value of the target voice information.
- the arbitration unit 15 specifies the display form of the display content corresponding to the target audio information based on the display arbitration result acquired in S300, and branches the process according to the display form.
- the arbitrating unit 15 proceeds to S318.
- the arbitrating unit 15 proceeds to S320.
- the arbitrating unit 15 decreases the coefficient K2 set in S314 by a predetermined ratio.
- the arbitrating unit 15 increases the coefficient K2 set in S314 by a predetermined ratio within a range where the resultant coefficient K2 does not exceed 1.
- the arbitrating unit 15 resets the voice information value by multiplying the voice information value of the target voice information by the coefficient K2. When the voice information value is reset, this process ends.
- the arbitration unit 15 does not multiply the audio information value of the target audio information by any coefficient; the audio information value defined as the default is kept as it is, and this processing ends.
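The coefficient logic of S304 to S324 described above can be summarized in a short sketch. This is a hedged illustration, not the patent's implementation; the function name, the example coefficients (K1 = 1.5, K2 = 0.5), and the ±20% display-form adjustment are all assumptions chosen for concreteness:

```python
def reset_voice_value(value, display_output, strong_link, rich_display):
    """Illustrative sketch of the value resetting process (S304-S324).

    value          -- default voice information value of the target voice info
    display_output -- True if corresponding display content is shown (S300-S302)
    strong_link    -- True if the display interlocking degree is high
    rich_display   -- True if the display form is emphasized / information-rich
    """
    if not display_output:
        # No corresponding display content: keep the default value as-is.
        return value
    if strong_link:
        k1 = 1.5                              # K1 > 1 raises the value
        # An emphasized display form raises K1 further; a sparse form lowers
        # it by a predetermined ratio, but K1 must stay greater than 1.
        k1 = k1 * 1.2 if rich_display else max(1.01, k1 * 0.8)
        return value * k1
    k2 = 0.5                                  # 0 < K2 < 1 lowers the value
    # A rich display form lowers K2 further; a sparse form raises it
    # toward (but not past) 1.
    k2 = k2 * 0.8 if rich_display else min(0.99, k2 * 1.2)
    return value * k2
```

With these assumed coefficients, a weakly linked value of 10 drops into the range 4 to 6, while a strongly linked value of 10 rises into the range 12 to 18, depending on the display form.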
- a specific execution example 1 of the above-described voice arbitration process (FIG. 4) will be described with reference to FIG.
- the voice information A related to the route guidance includes a guidance voice such as “Turn right about 300m ahead”.
- the voice information B related to the fee guidance includes, for example, a guidance voice saying “The fee is 3200 yen”.
- the significant information end time indicated by the content information regarding the audio information A is t2.
- the significant information end time t2 referred to here corresponds to the output start time t0 of the audio information A + the content length La.
- in other words, the significant information end time t2 corresponds to a time length given by the event occurrence time of the audio output request A + the delayable time limit + the content length La.
- an audio output event related to audio information B occurs at time t1 during output of audio information A, as shown in part A of FIG. At this time, it is assumed that the delay possible time limit indicated by the content information regarding the audio information B is t3.
- the delay time limit t3 of the later audio information B is later than the significant information end time t2 of the earlier audio information A.
- the output of the subsequent audio information B is temporarily held, and the output of the audio information B is performed after the output of the audio information A is finished.
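The decision just described (hold B, or arbitrate the two requests) reduces to a comparison of two time points. A minimal sketch, with illustrative names only:

```python
def schedule_later_audio(significant_end_a, delay_deadline_b):
    """Decide how to handle a later audio request B arriving while A plays.

    significant_end_a -- time when A's significant information ends (t2)
    delay_deadline_b  -- latest time to which B's output may be delayed (t3)
    """
    if delay_deadline_b >= significant_end_a:
        # B can wait until A has finished: hold B, then play it.
        return "hold"
    # B cannot wait: the two requests must be arbitrated by value.
    return "arbitrate"
```

In execution example 1 the deadline t3 comes after t2, so B is simply held; in execution example 2 the deadline t12 comes before t13, which triggers the value comparison instead.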
- the output of the first audio information A is started.
- the significant information end time indicated by the content information regarding the audio information A is t13.
- the significant information end time t13 referred to here corresponds to the output start time t10 of the audio information A + the content length La.
- an audio output event related to the audio information B occurs at time t11 during output of the audio information A.
- the delay possible time limit indicated by the content information regarding the audio information B is t12.
- the delay time limit t12 of the subsequent audio information B comes earlier than the significant information end time t13 of the previous audio information A.
- the voice information value indicated by the content information related to the voice information A is compared with the voice information value indicated by the content information related to the voice information B, as illustrated in part B of FIG.
- assume a case where the voice information value of the subsequent voice information B (value 10) is larger than the voice information value of the preceding voice information A (value 9).
- the voice information value is reset (value resetting process) for the voice information B.
- the audio information value (value 10) of the audio information B is multiplied by the coefficient K2 (for example, 0.5).
- as the voice information value of the voice information B, a new value (value 5) linked with the display arbitration is set, and the content information is rewritten accordingly.
- the voice information value (value 9) of the voice information A does not change.
- the voice information value (value 9) indicated by the content information related to the voice information A is greater than the voice information value (value 5) indicated by the content information related to the voice information B.
- the first voice information A is determined as “priority”
- the second voice information B is determined as “non-priority”.
- the audio information B is rejected, and only the output of the audio information A is carried out (continued). At this time, the audio information B is not output as audio, but the corresponding display content (see FIG. 6B) is output as an image.
- the voice information value (value 10) of the voice information B does not change.
- the voice information value (value 10) indicated by the content information related to the voice information B is greater than the voice information value (value 9) indicated by the content information related to the voice information A.
- the audio information value of the audio information B (value 10) is multiplied by the coefficient K1 (for example, 1.5).
- as the voice information value of the voice information B, a new value (value 15) linked with the display arbitration is set, and the content information is rewritten accordingly.
- the voice information value (value 15) indicated by the content information regarding the voice information B is larger than the voice information value (value 9) indicated by the content information regarding the voice information A.
- the first voice information A is determined as “non-priority” and the second voice information B is determined as “priority”.
- the output of the audio information A is interrupted, and the audio information B is output by interruption.
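With the concrete numbers of this example, both branches of the value resetting can be checked in a few lines. A sketch only; the coefficients K1 = 1.5 and K2 = 0.5 are the example values from the text:

```python
value_a, value_b = 9, 10          # default voice information values

# Weak connection between B and its display content: multiply by K2 = 0.5.
weak_b = value_b * 0.5            # B becomes 5, so A (9) stays "priority"
# Strong connection: multiply by K1 = 1.5.
strong_b = value_b * 1.5          # B becomes 15, so B interrupts A

priority_weak = "A" if value_a > weak_b else "B"
priority_strong = "A" if value_a > strong_b else "B"
```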
- the timing at which the output of the audio information A is interrupted may be set to the interruptible time TI indicated by the content information related to the audio information A.
- an interrupt notification sound may be output immediately before the interrupting audio information B.
- the interrupt notification sound is output on the condition that the relational expression “interruptible time TI of voice information A (non-priority) + content length LI of the interrupt notification sound ≤ delayable time limit t12 of voice information B (priority)” is satisfied.
- the content length LI of the interrupt notification sound can be defined in advance for the system of the output control device 1.
- the following describes the interrupt sound insertion process executed by the control unit 12 in S112 to insert the interrupt notification sound.
- the control unit 12 interrupts the output of the preceding audio information A corresponding to the audio output request A based on the result of the audio arbitration process, and outputs the subsequent audio information B corresponding to the audio output request B as an interrupt (S400; YES)
- the process proceeds to S401.
- the subsequent audio information B is output after the output of the first audio information is completed (S400; NO)
- the process proceeds to S405.
- the control unit 12 calculates a notification sound end time TI + LI, that is, the time obtained by adding the content length LI of the interrupt notification sound to the interruptible time TI of the voice information A (non-priority). It is then determined whether this notification sound end time comes before the delayable time limit t12 of the voice information B (priority).
- the control unit 12 outputs an interrupt notification sound after interrupting the output of the voice information A at the interruptible time TI. After completing the output of the interrupt notification sound, the audio information B is output. In S404, after interrupting the output of the voice information A at the interruptible time TI, the control unit 12 outputs the voice information B without outputting the interrupt notification sound. In S405, after the output of the audio information A is completed, the control unit 12 outputs the audio information B without outputting the interrupt notification sound. Thereafter, the interrupt sound insertion process is terminated. Through this interrupt sound insertion process, the interrupt can be appropriately notified to the user.
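The branching of S400 to S405 above amounts to one timing check. The following is a hedged sketch with illustrative names; the returned list simply records the planned output sequence:

```python
def insert_interrupt_sound(interrupting, ti, li, deadline_b):
    """Illustrative sketch of the interrupt sound insertion process (S400-S405).

    interrupting -- True if B is output by interrupting A (S400; YES)
    ti           -- interruptible time TI of A (non-priority)
    li           -- content length LI of the interrupt notification sound
    deadline_b   -- delayable time limit t12 of B (priority)
    """
    if not interrupting:
        # S405: A finishes normally; no notification sound is needed.
        return ["finish A", "play B"]
    if ti + li <= deadline_b:
        # S401/S402: the notification sound still fits before B's deadline,
        # so S403 plays it between stopping A and starting B.
        return ["stop A at TI", "notification sound", "play B"]
    # S404: no time for the notification sound; B starts right after A stops.
    return ["stop A at TI", "play B"]
```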
- the voice information A regarding the weather guidance includes, for example, a guidance voice such as “Here is today's weather for [month/day]. It will be sunny nationwide.”
- the voice information B related to the fee guidance includes, for example, a guidance voice saying “The fee is 3200 yen”.
- the output of the voice information A is started first.
- the significant information end time indicated by the content information regarding the audio information A is t24.
- the significant information end time t24 referred to here corresponds to the output start time t20 of the audio information A + the content length Lc.
- an audio output event related to the audio information B occurs at time t21 during output of the audio information A.
- the delay possible time limit indicated by the content information regarding the audio information B is t22.
- the delay time limit t22 of the subsequent audio information B arrives earlier than the significant information end time t24 of the previous audio information A.
- arbitration is performed by comparing the voice information value indicated by the content information related to the voice information A and the voice information value indicated by the content information related to the voice information B.
- the description is made on the assumption that the voice information value of the voice information B is larger than the voice information value of the voice information A.
- the voice information A is determined as “non-priority” and the voice information B is determined as “priority”. Further, the significant information end time t23 of the voice information B is compared with the life t25 indicated by the content information related to the voice information A, and it is determined whether or not the voice information A can be on standby. Note that the significant information end time t23 of the audio information B is the delay time limit t22 of the audio information B + the content length Lb of the audio information B.
- the voice information A is stored in the output standby buffer on condition that the relational expression “significant information end time t23 of voice information B (priority) ≤ life t25 of voice information A (non-priority)” is satisfied.
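The standby decision above is again a single comparison of time points; a minimal sketch with illustrative names:

```python
def can_standby(significant_end_b, life_a):
    """A (non-priority) goes to the output standby buffer only if it will
    still be alive when B (priority) finishes its significant information,
    i.e. t23 <= t25; otherwise A is not held for later output."""
    return significant_end_b <= life_a
```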
- the output of the voice information A is interrupted and the voice information B is output by interruption as illustrated in part B of FIG.
- the timing at which the output of the audio information A is interrupted corresponds to the interruptible time TI indicated by the content information related to the audio information A.
- an interrupt notification sound is output before the audio information B is output.
- the output of the audio information A stored in the output standby buffer is resumed.
- the output of the voice information A is resumed from the point at which it was interrupted at the interruptible time TI.
- a blank period of silence may be inserted immediately before the output of the voice information A is resumed.
- the blank period may be inserted on condition that the relational expression “life t25 of voice information A (non-priority) > significant information end time t23 of voice information B (priority) + content length LN of the blank period” is satisfied.
- the content length LN of the blank period can be defined in advance for the system of the output control apparatus 1.
- the details of the process executed by the control unit 12 in S112 (hereinafter referred to as the “blank period insertion process”) will be described with reference to FIG. This process is executed on condition that audio output by interruption has been started.
- the control unit 12 calculates a blank period end time by adding the blank period content length LN to the significant information end time t23 of the audio information B (priority) whose audio output by interruption has been started (S500). Thereafter, it is determined whether or not the blank period end time comes before the life t25 of the audio information A (non-priority) output after the interruption (S501). When the blank period end time comes before the life t25 of the voice information A (non-priority) (S501; YES), the process proceeds to S502. Otherwise (S501; NO), the process proceeds to S503.
- the control unit 12 ends the voice output of the voice information B at the timing corresponding to the significant information end time t23 of the voice information B and switches to the silent state. This switching time is the start time of the blank period. After the blank period ends, the output of the audio information A is resumed. In S503, the control unit 12 ends the sound output of the sound information B at a timing corresponding to the significant information end time t23 of the sound information B, and restarts the output of the sound information A immediately after the end. Thereafter, the blank period insertion processing is terminated. By this blank period insertion processing, a blank period can be appropriately formed.
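The blank period insertion process (S500 to S503) can be sketched similarly; the names, and the use of `<=` for “comes before”, are illustrative assumptions:

```python
def blank_period_plan(significant_end_b, blank_len, life_a):
    """Illustrative sketch of the blank period insertion process (S500-S503).

    significant_end_b -- t23, end time of B's significant information
    blank_len         -- LN, predefined content length of the blank period
    life_a            -- t25, life of the suspended audio information A
    """
    blank_end = significant_end_b + blank_len            # S500
    if blank_end <= life_a:                              # S501; YES
        # S502: end B at t23, stay silent for LN, then resume A.
        return ["end B at t23", "blank period", "resume A"]
    # S503: no room for silence before A's life expires; resume A at once.
    return ["end B at t23", "resume A"]
```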
- the output control device 1 causes the audio output devices 30a and 30b to output audio information in response to audio output requests supplied from the plurality of in-vehicle ECUs 20 that request output of audio information.
- when a plurality of voice output requests are supplied, the control unit 12 compares the voice information values preset for the pieces of voice information corresponding to these voice output requests, and causes the audio output device to preferentially output the voice information with the higher voice information value.
- the control unit 12 executes value resetting processing.
- in the value resetting process, it is determined whether or not display content corresponding to each piece of audio information is output from the image output device 30c (S300 to S302). Depending on the determination result, the audio information value of the audio information corresponding to the display content is variably set (S304 to S324).
- the audio information value is variably set in consideration of the mediation result of the display content corresponding to each audio information. For this reason, it is possible to preferentially output audio information that is optimal from the viewpoint of audio output and information output by image display, compared to the case where the levels of audio information values set by default are simply compared.
- with the output control apparatus 1, compared with the case where audio information values set by default are simply compared, the schedule for outputting audio information can be adjusted in a more flexible manner. Therefore, the voice information value of the voice information can be optimized and the voice output can be flexibly adjusted.
- the value resetting process when the display content is output from the image output device 30c, if the connection with the display content is strong, the audio information value of the audio information corresponding to the display content is increased. As a result, audio output is more easily performed as information is displayed as an image. Therefore, it is possible to increase the opportunities to emphasize important information and present it to the user.
- the strength of “connection” is defined in advance by the information of the display interlocking degree.
- the voice information value is increased or decreased based on information (the degree of display interlocking) indicating the strength of “connection” determined in advance.
- the audio information value of the audio information corresponding to the display content is variably set depending on the display form of the display content in the image output device 30c.
- when the connection between the audio information and the corresponding display content is weak, the decrease in the audio information value of the audio information is made smaller if the display information amount of the corresponding display content is small, and larger if the display information amount of the display content is large.
- the amount of information presented to the user tends to be constant with respect to a plurality of pieces of audio information that are output candidates at the same time. Therefore, a lot of information can be presented to the user fairly.
- the increase in the audio information value of the audio information is reduced if the display information amount of the display content is small, and is increased if the display information amount of the display content is large.
- information with a larger amount of display information in the display content is more easily output as audio. Therefore, it is possible to increase the opportunities to emphasize important information and present it to the user.
- the audio information value of the audio information corresponding to the display content is variably set according to the display form of the display content in the image output device 30c.
- details of the value resetting process are not limited to this.
- the audio information value of the audio information corresponding to the display content may be increased or decreased depending on the display form of the display content being output at that time in the image output device 30c.
- when the voice information value is increased here, the information value of other voice information for which image display is performed is relatively reduced. Therefore, for the reasons already described, this leads to a large amount of information being efficiently presented to the user.
- when the voice information value is decreased here, the information value of other voice information for which image display is performed is relatively increased. Therefore, for the reasons already described, this leads to more opportunities to emphasize important information and present it to the user.
- the output control device 1 does not necessarily need to be configured to perform display mediation processing and output control to the image output device 30c, and may be configured as at least an audio output control device.
- the output control apparatus 1 of the above embodiment is configured to receive information output requests notified from the plurality of in-vehicle ECUs 20; alternatively, the output control device 1 may be configured such that a plurality of applications installed in the output control device itself notify it of information output requests, which it then receives.
Abstract
Description
This program, by being incorporated into one or more computers, can achieve effects equivalent to those achieved by the audio output control device according to one aspect of the present invention. The program according to one aspect of the present invention may be stored in a ROM, flash memory, or the like incorporated in a computer and loaded into the computer from the ROM or flash memory for use, or may be loaded into the computer via a network for use.
The present invention should not be construed as being limited in any way by the following embodiments. Aspects in which part of the following embodiments is omitted are also embodiments of the present invention. Any conceivable aspect that does not depart from the essence of the invention, which is specified solely by the language recited in the claims, is also an embodiment of the present invention. Reference numerals used in the description of the following embodiments are also used in the claims as appropriate, but they are used for the purpose of facilitating understanding of the invention according to each claim and are not intended to limit the technical scope of the invention according to each claim.
As shown in FIG. 1, the output control device 1 is connected to audio output devices 30a and 30b such as speakers, an image output device 30c such as a display, and in-vehicle devices such as a plurality of in-vehicle ECUs 20, and constitutes a vehicle audio/display output system. The audio output devices 30a and 30b are provided at various locations in the vehicle cabin. The image output device 30c is provided at a position in the vehicle cabin visible to the driver (user). The output control device 1 is an on-board device that outputs audio information and image information in response to output requests from applications running on the in-vehicle ECUs 20.
The procedure of the main process executed by the control unit 12 of the output control device 1 will be described with reference to the flowchart of FIG. 3. This process is executed when an information output event occurs in an application executed by an in-vehicle ECU 20.
The procedure of the audio arbitration process executed by the arbitration unit 15 of the control unit 12 will be described with reference to the flowchart of FIG. 4. This process is executed in S110 of the main process described above (see FIG. 3).
The procedure of the value resetting process executed by the arbitration unit 15 of the control unit 12 will be described with reference to the flowchart of FIG. 5. This process is executed in S212 of the audio arbitration process described above (see FIG. 4). This process is performed on both the audio information value A of the preceding audio information and the audio information value B of the subsequent audio information.
A specific execution example 1 of the audio arbitration process described above (FIG. 4) will be described with reference to FIG. 7. Here, a case is assumed in which two audio output requests overlap: audio information A related to route guidance and audio information B related to fee guidance. The audio information A related to route guidance includes, for example, a guidance voice such as “Turn right about 300 m ahead.” The audio information B related to fee guidance includes, for example, a guidance voice such as “The fee is 3,200 yen.”
A specific execution example 2 of the audio arbitration process described above will be described with reference to FIG. 8. This description includes a description of the value resetting process (see FIG. 5). The assumptions of this example are substantially the same as those of execution example 1. In this example, however, there is no display content corresponding to the audio information A. For this reason, the content information related to the audio information A has neither a display arbitration result nor a display interlocking degree. For the audio information B, on the other hand, corresponding display content exists, so the content information related to the audio information B has a display arbitration result and a display interlocking degree. Furthermore, since the audio information B is weakly connected to the corresponding display content, the display interlocking degree is assumed to be set to 0.
Based on the result of the audio arbitration process, when the control unit 12 interrupts the output of the preceding audio information A corresponding to the audio output request A and outputs the subsequent audio information B corresponding to the audio output request B by interruption (S400; YES), the process proceeds to S401. On the other hand, when the subsequent audio information B is output after the output of the preceding audio information is completed (S400; NO), the process proceeds to S405.
A specific execution example 3 of the audio arbitration process described above will be described with reference to FIG. 11. Here, a case is assumed in which two audio output requests overlap: audio information A related to weather guidance and audio information B related to fee guidance. The audio information A related to weather guidance includes, for example, a guidance voice such as “Here is today's weather for [month/day]. It will be sunny nationwide.” The audio information B related to fee guidance includes, for example, a guidance voice such as “The fee is 3,200 yen.”
As described above, the output control device 1 causes the audio output devices 30a and 30b to output audio information in response to audio output requests supplied from the plurality of in-vehicle ECUs 20 that request output of audio information. When a plurality of audio output requests are supplied, the control unit 12 compares the audio information values preset for the pieces of audio information corresponding to these audio output requests, and causes the audio output device to preferentially output the audio information with the higher value. At this time, the control unit 12 executes the value resetting process.
Although embodiments of the present invention have been described above, the present invention is not limited to the above embodiments and can be implemented in various aspects without departing from the gist of the present invention.
Claims (7)
- An audio output control device (1) comprising a control unit (12, S110) that causes audio information to be output from an audio output device (30a, 30b) in response to output requests supplied from a plurality of output request units (20) that request output of audio information, the control unit comparing the information values preset for the pieces of audio information corresponding to the plurality of output requests and causing the audio output device to preferentially output the audio information with the higher information value,
wherein the control unit includes:
a determination unit (15, S300 to S302) that determines whether or not content corresponding to each piece of audio information is output from a display device; and
a value variable setting unit (15, S304 to S324) that variably sets the information value according to the determination result of the determination unit. - The audio output control device according to claim 1, wherein the value variable setting unit decreases the information value of the audio information corresponding to the content when the content is output from the display device.
- The audio output control device according to claim 1, wherein the value variable setting unit increases the information value of the audio information corresponding to the content when the content is output from the display device.
- The audio output control device according to any one of claims 1 to 3, wherein, when the content is output from the display device, the value variable setting unit variably sets the information value of the audio information corresponding to the content according to the display form of the content on the display device.
- The audio output control device according to any one of claims 1 to 3, wherein the value variable setting unit is configured to increase the information value of the audio information corresponding to the content when the content is output from the display device and the connection between the content and the corresponding audio information is strong, and to decrease the information value of the audio information corresponding to the content when the content is output from the display device and the connection between the content and the corresponding audio information is weak.
- A program for causing a computer to function as the control unit according to any one of claims 1 to 5.
- A computer-readable recording medium (14) recording a program for causing a computer (12) to function as:
a control unit that causes audio information to be output from an audio output device (30a, 30b) in response to output requests supplied from a plurality of output request units (20) that request output of audio information, the control unit comparing the information values preset for the pieces of audio information corresponding to the plurality of output requests and causing the audio output device to preferentially output the audio information with the higher information value;
a determination unit that determines whether or not content corresponding to each piece of audio information is output from a display device; and
a value variable setting unit that variably sets the information value according to the determination result of the determination unit.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015536504A JP6037026B2 (ja) | 2013-09-11 | 2014-08-18 | 音声出力制御装置、プログラムおよび記録媒体 |
US14/917,439 US10163435B2 (en) | 2013-09-11 | 2014-08-18 | Voice output control device, voice output control method, and recording medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013188383 | 2013-09-11 | ||
JP2013-188383 | 2013-09-11 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015037396A1 true WO2015037396A1 (ja) | 2015-03-19 |
Family
ID=52665514
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/071582 WO2015037396A1 (ja) | 2013-09-11 | 2014-08-18 | 音声出力制御装置、プログラムおよび記録媒体 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10163435B2 (ja) |
JP (1) | JP6037026B2 (ja) |
WO (1) | WO2015037396A1 (ja) |
Cited By (67)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017147075A1 (en) * | 2016-02-22 | 2017-08-31 | Sonos, Inc. | Audio response playback |
US9794720B1 (en) | 2016-09-22 | 2017-10-17 | Sonos, Inc. | Acoustic position measurement |
US9811314B2 (en) | 2016-02-22 | 2017-11-07 | Sonos, Inc. | Metadata exchange involving a networked playback system and a networked microphone system |
US9942678B1 (en) | 2016-09-27 | 2018-04-10 | Sonos, Inc. | Audio playback settings for voice interaction |
US9947316B2 (en) | 2016-02-22 | 2018-04-17 | Sonos, Inc. | Voice control of a media playback system |
US9965247B2 (en) | 2016-02-22 | 2018-05-08 | Sonos, Inc. | Voice controlled media playback system based on user profile |
US9978390B2 (en) | 2016-06-09 | 2018-05-22 | Sonos, Inc. | Dynamic player selection for audio signal processing |
US10021503B2 (en) | 2016-08-05 | 2018-07-10 | Sonos, Inc. | Determining direction of networked microphone device relative to audio playback device |
US10051366B1 (en) | 2017-09-28 | 2018-08-14 | Sonos, Inc. | Three-dimensional beam forming with a microphone array |
US10075793B2 (en) | 2016-09-30 | 2018-09-11 | Sonos, Inc. | Multi-orientation playback device microphones |
US10095470B2 (en) | 2016-02-22 | 2018-10-09 | Sonos, Inc. | Audio response playback |
US10097939B2 (en) | 2016-02-22 | 2018-10-09 | Sonos, Inc. | Compensation for speaker nonlinearities |
US10115400B2 (en) | 2016-08-05 | 2018-10-30 | Sonos, Inc. | Multiple voice services |
US10134399B2 (en) | 2016-07-15 | 2018-11-20 | Sonos, Inc. | Contextualization of voice inputs |
US10152969B2 (en) | 2016-07-15 | 2018-12-11 | Sonos, Inc. | Voice detection by multiple devices |
US10181323B2 (en) | 2016-10-19 | 2019-01-15 | Sonos, Inc. | Arbitration-based voice recognition |
JPWO2017175432A1 (ja) * | 2016-04-05 | 2019-03-22 | ソニー株式会社 | 情報処理装置、情報処理方法、およびプログラム |
US10264030B2 (en) | 2016-02-22 | 2019-04-16 | Sonos, Inc. | Networked microphone device control |
US10445057B2 (en) | 2017-09-08 | 2019-10-15 | Sonos, Inc. | Dynamic computation of system response volume |
US10446165B2 (en) | 2017-09-27 | 2019-10-15 | Sonos, Inc. | Robust short-time fourier transform acoustic echo cancellation during audio playback |
US10466962B2 (en) | 2017-09-29 | 2019-11-05 | Sonos, Inc. | Media playback system with voice assistance |
US10475449B2 (en) | 2017-08-07 | 2019-11-12 | Sonos, Inc. | Wake-word detection suppression |
US10482868B2 (en) | 2017-09-28 | 2019-11-19 | Sonos, Inc. | Multi-channel acoustic echo cancellation |
US10573321B1 (en) | 2018-09-25 | 2020-02-25 | Sonos, Inc. | Voice detection optimization based on selected voice assistant service |
US10587430B1 (en) | 2018-09-14 | 2020-03-10 | Sonos, Inc. | Networked devices, systems, and methods for associating playback devices based on sound codes |
US10586540B1 (en) | 2019-06-12 | 2020-03-10 | Sonos, Inc. | Network microphone device with command keyword conditioning |
JP2020042699A (ja) * | 2018-09-13 | 2020-03-19 | 株式会社ユピテル | システムおよびプログラム等 |
US10602268B1 (en) | 2018-12-20 | 2020-03-24 | Sonos, Inc. | Optimization of network microphone devices using noise classification |
US10621981B2 (en) | 2017-09-28 | 2020-04-14 | Sonos, Inc. | Tone interference cancellation |
US10681460B2 (en) | 2018-06-28 | 2020-06-09 | Sonos, Inc. | Systems and methods for associating playback devices with voice assistant services |
US10692518B2 (en) | 2018-09-29 | 2020-06-23 | Sonos, Inc. | Linear filtering for noise-suppressed speech detection via multiple network microphone devices |
US10797667B2 (en) | 2018-08-28 | 2020-10-06 | Sonos, Inc. | Audio notifications |
US10818290B2 (en) | 2017-12-11 | 2020-10-27 | Sonos, Inc. | Home graph |
US10847178B2 (en) | 2018-05-18 | 2020-11-24 | Sonos, Inc. | Linear filtering for noise-suppressed speech detection |
US10867604B2 (en) | 2019-02-08 | 2020-12-15 | Sonos, Inc. | Devices, systems, and methods for distributed voice processing |
US10871943B1 (en) | 2019-07-31 | 2020-12-22 | Sonos, Inc. | Noise classification for event detection |
US10878811B2 (en) | 2018-09-14 | 2020-12-29 | Sonos, Inc. | Networked devices, systems, and methods for intelligently deactivating wake-word engines |
US10880650B2 (en) | 2017-12-10 | 2020-12-29 | Sonos, Inc. | Network microphone devices with automatic do not disturb actuation capabilities |
US10959029B2 (en) | 2018-05-25 | 2021-03-23 | Sonos, Inc. | Determining and adapting to changes in microphone performance of playback devices |
US11024331B2 (en) | 2018-09-21 | 2021-06-01 | Sonos, Inc. | Voice detection optimization using sound metadata |
US11076035B2 (en) | 2018-08-28 | 2021-07-27 | Sonos, Inc. | Do not disturb feature for audio notifications |
US11100923B2 (en) | 2018-09-28 | 2021-08-24 | Sonos, Inc. | Systems and methods for selective wake word detection using neural network models |
US11120794B2 (en) | 2019-05-03 | 2021-09-14 | Sonos, Inc. | Voice assistant persistence across multiple network microphone devices |
US11132989B2 (en) | 2018-12-13 | 2021-09-28 | Sonos, Inc. | Networked microphone devices, systems, and methods of localized arbitration |
US11138969B2 (en) | 2019-07-31 | 2021-10-05 | Sonos, Inc. | Locally distributed keyword detection |
US11138975B2 (en) | 2019-07-31 | 2021-10-05 | Sonos, Inc. | Locally distributed keyword detection |
US11175880B2 (en) | 2018-05-10 | 2021-11-16 | Sonos, Inc. | Systems and methods for voice-assisted media content selection |
US11183183B2 (en) | 2018-12-07 | 2021-11-23 | Sonos, Inc. | Systems and methods of operating media playback systems having multiple voice assistant services |
US11183181B2 (en) | 2017-03-27 | 2021-11-23 | Sonos, Inc. | Systems and methods of multiple voice services |
US11189286B2 (en) | 2019-10-22 | 2021-11-30 | Sonos, Inc. | VAS toggle based on device orientation |
US11200889B2 (en) | 2018-11-15 | 2021-12-14 | Sonos, Inc. | Dilated convolutions and gating for efficient keyword spotting |
US11200894B2 (en) | 2019-06-12 | 2021-12-14 | Sonos, Inc. | Network microphone device with command keyword eventing |
US11200900B2 (en) | 2019-12-20 | 2021-12-14 | Sonos, Inc. | Offline voice control |
US11215473B2 (en) | 2018-11-27 | 2022-01-04 | Toyota Jidosha Kabushiki Kaisha | Driving support device, driving support Method, and non-transitory recording medium in which program is stored |
US11308962B2 (en) | 2020-05-20 | 2022-04-19 | Sonos, Inc. | Input detection windowing |
US11308958B2 (en) | 2020-02-07 | 2022-04-19 | Sonos, Inc. | Localized wakeword verification |
US11315556B2 (en) | 2019-02-08 | 2022-04-26 | Sonos, Inc. | Devices, systems, and methods for distributed voice processing by transmitting sound data associated with a wake word to an appropriate device for identification |
US11343614B2 (en) | 2018-01-31 | 2022-05-24 | Sonos, Inc. | Device designation of playback and network microphone device arrangements |
US11361756B2 (en) | 2019-06-12 | 2022-06-14 | Sonos, Inc. | Conditional wake word eventing based on environment |
US11482224B2 (en) | 2020-05-20 | 2022-10-25 | Sonos, Inc. | Command keywords with input detection windowing |
US11551700B2 (en) | 2021-01-25 | 2023-01-10 | Sonos, Inc. | Systems and methods for power-efficient keyword detection |
US11556307B2 (en) | 2020-01-31 | 2023-01-17 | Sonos, Inc. | Local voice data processing |
US11562740B2 (en) | 2020-01-07 | 2023-01-24 | Sonos, Inc. | Voice verification for media playback |
US11698771B2 (en) | 2020-08-25 | 2023-07-11 | Sonos, Inc. | Vocal guidance engines for playback devices |
US11727919B2 (en) | 2020-05-20 | 2023-08-15 | Sonos, Inc. | Memory allocation for keyword spotting engines |
US11899519B2 (en) | 2018-10-23 | 2024-02-13 | Sonos, Inc. | Multiple stage network microphone device with reduced power consumption and processing load |
US11984123B2 (en) | 2020-11-12 | 2024-05-14 | Sonos, Inc. | Network device interaction by range |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180302454A1 (en) * | 2017-04-05 | 2018-10-18 | Interlock Concepts Inc. | Audio visual integration device |
US20200111475A1 (en) * | 2017-05-16 | 2020-04-09 | Sony Corporation | Information processing apparatus and information processing method |
US10708268B2 (en) * | 2017-07-31 | 2020-07-07 | Airwatch, Llc | Managing voice applications within a digital workspace |
CN115529832A (zh) * | 2021-04-08 | 2022-12-27 | 松下知识产权经营株式会社 | 控制方法、控制装置及程序 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008201217A (ja) * | 2007-02-19 | 2008-09-04 | Nissan Motor Co Ltd | 情報提供装置、情報提供方法及び情報提供システム |
WO2012101768A1 (ja) * | 2011-01-26 | 2012-08-02 | 三菱電機株式会社 | エレベータの案内装置 |
JP2013029977A (ja) * | 2011-07-28 | 2013-02-07 | Alpine Electronics Inc | 割り込み制御装置および割り込み制御方法 |
JP2013083607A (ja) * | 2011-10-12 | 2013-05-09 | Alpine Electronics Inc | 電子装置、出力制御方法および出力制御プログラム |
JP2013160778A (ja) * | 2012-02-01 | 2013-08-19 | Suzuki Motor Corp | 車両用制御装置 |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2058497B (en) * | 1979-08-31 | 1984-02-29 | Nissan Motor | Voice warning system with volume control |
JP3738923B2 (ja) | 1996-09-30 | 2006-01-25 | Mazda Motor Corp | Navigation device |
JP3703050B2 (ja) | 1996-09-30 | 2005-10-05 | Mazda Motor Corp | Navigation device |
JP3703043B2 (ja) | 1996-09-30 | 2005-10-05 | Mazda Motor Corp | Navigation device |
JP3485049B2 (ja) | 1999-11-30 | 2004-01-13 | Denso Corp | Electronic device with facility guide function |
JP4700904B2 (ja) * | 2003-12-08 | 2011-06-15 | Pioneer Corp | Information processing device and travel information voice guidance method |
JP2007057844A (ja) * | 2005-08-24 | 2007-03-08 | Fujitsu Ltd | Speech recognition system and speech processing system |
JP5119587B2 (ja) * | 2005-10-31 | 2013-01-16 | Denso Corp | Vehicle display device |
JP4471128B2 (ja) * | 2006-11-22 | 2010-06-02 | Seiko Epson Corp | Semiconductor integrated circuit device and electronic apparatus |
JP2010521709A (ja) * | 2007-03-21 | 2010-06-24 | TomTom International B.V. | Apparatus and method for converting text to speech and distributing it |
JP2011091617A (ja) * | 2009-10-22 | 2011-05-06 | Denso Corp | Vehicle data communication device |
US9273978B2 (en) * | 2010-01-08 | 2016-03-01 | Blackberry Limited | Methods, device and systems for delivery of navigational notifications |
JP5229379B2 (ja) | 2011-02-21 | 2013-07-03 | Denso Corp | Display control device |
JP2013171312A (ja) * | 2012-02-17 | 2013-09-02 | Denso Corp | Video and audio control device |
CN104919278B (zh) * | 2013-01-09 | 2017-09-19 | Mitsubishi Electric Corp | Speech recognition device and display method |
JP6020189B2 (ja) | 2013-01-18 | 2016-11-02 | Denso Corp | Audio output control device |
US9354074B2 (en) * | 2014-07-17 | 2016-05-31 | Google Inc. | Controlling media output during consecutive navigation interruptions |
2014
- 2014-08-18 JP JP2015536504A patent/JP6037026B2/ja active Active
- 2014-08-18 WO PCT/JP2014/071582 patent/WO2015037396A1/ja active Application Filing
- 2014-08-18 US US14/917,439 patent/US10163435B2/en active Active
Cited By (176)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10225651B2 (en) | 2016-02-22 | 2019-03-05 | Sonos, Inc. | Default playback device designation |
US10970035B2 (en) | 2016-02-22 | 2021-04-06 | Sonos, Inc. | Audio response playback |
US11405430B2 (en) | 2016-02-22 | 2022-08-02 | Sonos, Inc. | Networked microphone device control |
US10847143B2 (en) | 2016-02-22 | 2020-11-24 | Sonos, Inc. | Voice control of a media playback system |
US9820039B2 (en) | 2016-02-22 | 2017-11-14 | Sonos, Inc. | Default playback devices |
US11513763B2 (en) | 2016-02-22 | 2022-11-29 | Sonos, Inc. | Audio response playback |
US9947316B2 (en) | 2016-02-22 | 2018-04-17 | Sonos, Inc. | Voice control of a media playback system |
US9965247B2 (en) | 2016-02-22 | 2018-05-08 | Sonos, Inc. | Voice controlled media playback system based on user profile |
US11514898B2 (en) | 2016-02-22 | 2022-11-29 | Sonos, Inc. | Voice control of a media playback system |
US11212612B2 (en) | 2016-02-22 | 2021-12-28 | Sonos, Inc. | Voice control of a media playback system |
WO2017147075A1 (en) * | 2016-02-22 | 2017-08-31 | Sonos, Inc. | Audio response playback |
US10095470B2 (en) | 2016-02-22 | 2018-10-09 | Sonos, Inc. | Audio response playback |
US10097939B2 (en) | 2016-02-22 | 2018-10-09 | Sonos, Inc. | Compensation for speaker nonlinearities |
US10097919B2 (en) | 2016-02-22 | 2018-10-09 | Sonos, Inc. | Music service selection |
US10142754B2 (en) | 2016-02-22 | 2018-11-27 | Sonos, Inc. | Sensor on moving component of transducer |
US11184704B2 (en) | 2016-02-22 | 2021-11-23 | Sonos, Inc. | Music service selection |
US11983463B2 (en) | 2016-02-22 | 2024-05-14 | Sonos, Inc. | Metadata exchange involving a networked playback system and a networked microphone system |
JP2019509679A (ja) * | 2016-02-22 | 2019-04-04 | Sonos Inc | Audio response playback |
US10971139B2 (en) | 2016-02-22 | 2021-04-06 | Sonos, Inc. | Voice control of a media playback system |
US10743101B2 (en) | 2016-02-22 | 2020-08-11 | Sonos, Inc. | Content mixing |
US10764679B2 (en) | 2016-02-22 | 2020-09-01 | Sonos, Inc. | Voice control of a media playback system |
US10212512B2 (en) | 2016-02-22 | 2019-02-19 | Sonos, Inc. | Default playback devices |
US9811314B2 (en) | 2016-02-22 | 2017-11-07 | Sonos, Inc. | Metadata exchange involving a networked playback system and a networked microphone system |
US9772817B2 (en) | 2016-02-22 | 2017-09-26 | Sonos, Inc. | Room-corrected voice detection |
US10555077B2 (en) | 2016-02-22 | 2020-02-04 | Sonos, Inc. | Music service selection |
US10264030B2 (en) | 2016-02-22 | 2019-04-16 | Sonos, Inc. | Networked microphone device control |
US11863593B2 (en) | 2016-02-22 | 2024-01-02 | Sonos, Inc. | Networked microphone device control |
US11832068B2 (en) | 2016-02-22 | 2023-11-28 | Sonos, Inc. | Music service selection |
US11137979B2 (en) | 2016-02-22 | 2021-10-05 | Sonos, Inc. | Metadata exchange involving a networked playback system and a networked microphone system |
US11006214B2 (en) | 2016-02-22 | 2021-05-11 | Sonos, Inc. | Default playback device designation |
US10509626B2 (en) | 2016-02-22 | 2019-12-17 | Sonos, Inc. | Handling of loss of pairing between networked devices |
US10365889B2 (en) | 2016-02-22 | 2019-07-30 | Sonos, Inc. | Metadata exchange involving a networked playback system and a networked microphone system |
US10409549B2 (en) | 2016-02-22 | 2019-09-10 | Sonos, Inc. | Audio response playback |
US11726742B2 (en) | 2016-02-22 | 2023-08-15 | Sonos, Inc. | Handling of loss of pairing between networked devices |
US11042355B2 (en) | 2016-02-22 | 2021-06-22 | Sonos, Inc. | Handling of loss of pairing between networked devices |
US11736860B2 (en) | 2016-02-22 | 2023-08-22 | Sonos, Inc. | Voice control of a media playback system |
US11750969B2 (en) | 2016-02-22 | 2023-09-05 | Sonos, Inc. | Default playback device designation |
US10499146B2 (en) | 2016-02-22 | 2019-12-03 | Sonos, Inc. | Voice control of a media playback system |
US11016726B2 (en) | 2016-04-05 | 2021-05-25 | Sony Corporation | Information processing apparatus and information processing method |
JPWO2017175432A1 (ja) * | 2016-04-05 | 2019-03-22 | Sony Corp | Information processing device, information processing method, and program |
US11133018B2 (en) | 2016-06-09 | 2021-09-28 | Sonos, Inc. | Dynamic player selection for audio signal processing |
US10332537B2 (en) | 2016-06-09 | 2019-06-25 | Sonos, Inc. | Dynamic player selection for audio signal processing |
US11545169B2 (en) | 2016-06-09 | 2023-01-03 | Sonos, Inc. | Dynamic player selection for audio signal processing |
US10714115B2 (en) | 2016-06-09 | 2020-07-14 | Sonos, Inc. | Dynamic player selection for audio signal processing |
US9978390B2 (en) | 2016-06-09 | 2018-05-22 | Sonos, Inc. | Dynamic player selection for audio signal processing |
US10152969B2 (en) | 2016-07-15 | 2018-12-11 | Sonos, Inc. | Voice detection by multiple devices |
US11979960B2 (en) | 2016-07-15 | 2024-05-07 | Sonos, Inc. | Contextualization of voice inputs |
US10297256B2 (en) | 2016-07-15 | 2019-05-21 | Sonos, Inc. | Voice detection by multiple devices |
US10593331B2 (en) | 2016-07-15 | 2020-03-17 | Sonos, Inc. | Contextualization of voice inputs |
US10134399B2 (en) | 2016-07-15 | 2018-11-20 | Sonos, Inc. | Contextualization of voice inputs |
US11184969B2 (en) | 2016-07-15 | 2021-11-23 | Sonos, Inc. | Contextualization of voice inputs |
US11664023B2 (en) | 2016-07-15 | 2023-05-30 | Sonos, Inc. | Voice detection by multiple devices |
US10699711B2 (en) | 2016-07-15 | 2020-06-30 | Sonos, Inc. | Voice detection by multiple devices |
US10354658B2 (en) | 2016-08-05 | 2019-07-16 | Sonos, Inc. | Voice control of playback device using voice assistant service(s) |
US10021503B2 (en) | 2016-08-05 | 2018-07-10 | Sonos, Inc. | Determining direction of networked microphone device relative to audio playback device |
US10115400B2 (en) | 2016-08-05 | 2018-10-30 | Sonos, Inc. | Multiple voice services |
US10565999B2 (en) | 2016-08-05 | 2020-02-18 | Sonos, Inc. | Playback device supporting concurrent voice assistant services |
US10565998B2 (en) | 2016-08-05 | 2020-02-18 | Sonos, Inc. | Playback device supporting concurrent voice assistant services |
US10847164B2 (en) | 2016-08-05 | 2020-11-24 | Sonos, Inc. | Playback device supporting concurrent voice assistants |
US11531520B2 (en) | 2016-08-05 | 2022-12-20 | Sonos, Inc. | Playback device supporting concurrent voice assistants |
US10034116B2 (en) | 2016-09-22 | 2018-07-24 | Sonos, Inc. | Acoustic position measurement |
US9794720B1 (en) | 2016-09-22 | 2017-10-17 | Sonos, Inc. | Acoustic position measurement |
US11641559B2 (en) | 2016-09-27 | 2023-05-02 | Sonos, Inc. | Audio playback settings for voice interaction |
US10582322B2 (en) | 2016-09-27 | 2020-03-03 | Sonos, Inc. | Audio playback settings for voice interaction |
US9942678B1 (en) | 2016-09-27 | 2018-04-10 | Sonos, Inc. | Audio playback settings for voice interaction |
US10117037B2 (en) | 2016-09-30 | 2018-10-30 | Sonos, Inc. | Orientation-based playback device microphone selection |
US10873819B2 (en) | 2016-09-30 | 2020-12-22 | Sonos, Inc. | Orientation-based playback device microphone selection |
US10075793B2 (en) | 2016-09-30 | 2018-09-11 | Sonos, Inc. | Multi-orientation playback device microphones |
US10313812B2 (en) | 2016-09-30 | 2019-06-04 | Sonos, Inc. | Orientation-based playback device microphone selection |
US11516610B2 (en) | 2016-09-30 | 2022-11-29 | Sonos, Inc. | Orientation-based playback device microphone selection |
US11308961B2 (en) | 2016-10-19 | 2022-04-19 | Sonos, Inc. | Arbitration-based voice recognition |
US11727933B2 (en) | 2016-10-19 | 2023-08-15 | Sonos, Inc. | Arbitration-based voice recognition |
US10181323B2 (en) | 2016-10-19 | 2019-01-15 | Sonos, Inc. | Arbitration-based voice recognition |
US10614807B2 (en) | 2016-10-19 | 2020-04-07 | Sonos, Inc. | Arbitration-based voice recognition |
US11183181B2 (en) | 2017-03-27 | 2021-11-23 | Sonos, Inc. | Systems and methods of multiple voice services |
US11380322B2 (en) | 2017-08-07 | 2022-07-05 | Sonos, Inc. | Wake-word detection suppression |
US11900937B2 (en) | 2017-08-07 | 2024-02-13 | Sonos, Inc. | Wake-word detection suppression |
US10475449B2 (en) | 2017-08-07 | 2019-11-12 | Sonos, Inc. | Wake-word detection suppression |
US11500611B2 (en) | 2017-09-08 | 2022-11-15 | Sonos, Inc. | Dynamic computation of system response volume |
US10445057B2 (en) | 2017-09-08 | 2019-10-15 | Sonos, Inc. | Dynamic computation of system response volume |
US11080005B2 (en) | 2017-09-08 | 2021-08-03 | Sonos, Inc. | Dynamic computation of system response volume |
US11017789B2 (en) | 2017-09-27 | 2021-05-25 | Sonos, Inc. | Robust Short-Time Fourier Transform acoustic echo cancellation during audio playback |
US11646045B2 (en) | 2017-09-27 | 2023-05-09 | Sonos, Inc. | Robust short-time fourier transform acoustic echo cancellation during audio playback |
US10446165B2 (en) | 2017-09-27 | 2019-10-15 | Sonos, Inc. | Robust short-time fourier transform acoustic echo cancellation during audio playback |
US10051366B1 (en) | 2017-09-28 | 2018-08-14 | Sonos, Inc. | Three-dimensional beam forming with a microphone array |
US11769505B2 (en) | 2017-09-28 | 2023-09-26 | Sonos, Inc. | Echo of tone interferance cancellation using two acoustic echo cancellers |
US11538451B2 (en) | 2017-09-28 | 2022-12-27 | Sonos, Inc. | Multi-channel acoustic echo cancellation |
US10621981B2 (en) | 2017-09-28 | 2020-04-14 | Sonos, Inc. | Tone interference cancellation |
US10880644B1 (en) | 2017-09-28 | 2020-12-29 | Sonos, Inc. | Three-dimensional beam forming with a microphone array |
US11302326B2 (en) | 2017-09-28 | 2022-04-12 | Sonos, Inc. | Tone interference cancellation |
US10891932B2 (en) | 2017-09-28 | 2021-01-12 | Sonos, Inc. | Multi-channel acoustic echo cancellation |
US10511904B2 (en) | 2017-09-28 | 2019-12-17 | Sonos, Inc. | Three-dimensional beam forming with a microphone array |
US10482868B2 (en) | 2017-09-28 | 2019-11-19 | Sonos, Inc. | Multi-channel acoustic echo cancellation |
US10606555B1 (en) | 2017-09-29 | 2020-03-31 | Sonos, Inc. | Media playback system with concurrent voice assistance |
US11175888B2 (en) | 2017-09-29 | 2021-11-16 | Sonos, Inc. | Media playback system with concurrent voice assistance |
US11893308B2 (en) | 2017-09-29 | 2024-02-06 | Sonos, Inc. | Media playback system with concurrent voice assistance |
US10466962B2 (en) | 2017-09-29 | 2019-11-05 | Sonos, Inc. | Media playback system with voice assistance |
US11288039B2 (en) | 2017-09-29 | 2022-03-29 | Sonos, Inc. | Media playback system with concurrent voice assistance |
US10880650B2 (en) | 2017-12-10 | 2020-12-29 | Sonos, Inc. | Network microphone devices with automatic do not disturb actuation capabilities |
US11451908B2 (en) | 2017-12-10 | 2022-09-20 | Sonos, Inc. | Network microphone devices with automatic do not disturb actuation capabilities |
US11676590B2 (en) | 2017-12-11 | 2023-06-13 | Sonos, Inc. | Home graph |
US10818290B2 (en) | 2017-12-11 | 2020-10-27 | Sonos, Inc. | Home graph |
US11343614B2 (en) | 2018-01-31 | 2022-05-24 | Sonos, Inc. | Device designation of playback and network microphone device arrangements |
US11689858B2 (en) | 2018-01-31 | 2023-06-27 | Sonos, Inc. | Device designation of playback and network microphone device arrangements |
US11175880B2 (en) | 2018-05-10 | 2021-11-16 | Sonos, Inc. | Systems and methods for voice-assisted media content selection |
US11797263B2 (en) | 2018-05-10 | 2023-10-24 | Sonos, Inc. | Systems and methods for voice-assisted media content selection |
US10847178B2 (en) | 2018-05-18 | 2020-11-24 | Sonos, Inc. | Linear filtering for noise-suppressed speech detection |
US11715489B2 (en) | 2018-05-18 | 2023-08-01 | Sonos, Inc. | Linear filtering for noise-suppressed speech detection |
US11792590B2 (en) | 2018-05-25 | 2023-10-17 | Sonos, Inc. | Determining and adapting to changes in microphone performance of playback devices |
US10959029B2 (en) | 2018-05-25 | 2021-03-23 | Sonos, Inc. | Determining and adapting to changes in microphone performance of playback devices |
US10681460B2 (en) | 2018-06-28 | 2020-06-09 | Sonos, Inc. | Systems and methods for associating playback devices with voice assistant services |
US11197096B2 (en) | 2018-06-28 | 2021-12-07 | Sonos, Inc. | Systems and methods for associating playback devices with voice assistant services |
US11696074B2 (en) | 2018-06-28 | 2023-07-04 | Sonos, Inc. | Systems and methods for associating playback devices with voice assistant services |
US11076035B2 (en) | 2018-08-28 | 2021-07-27 | Sonos, Inc. | Do not disturb feature for audio notifications |
US11563842B2 (en) | 2018-08-28 | 2023-01-24 | Sonos, Inc. | Do not disturb feature for audio notifications |
US10797667B2 (en) | 2018-08-28 | 2020-10-06 | Sonos, Inc. | Audio notifications |
US11482978B2 (en) | 2018-08-28 | 2022-10-25 | Sonos, Inc. | Audio notifications |
JP2020042699A (ja) * | 2018-09-13 | 2020-03-19 | Yupiteru Corp | System, program, and the like |
JP7090332B2 (ja) | 2022-06-24 | System, program, and the like |
US11551690B2 (en) | 2018-09-14 | 2023-01-10 | Sonos, Inc. | Networked devices, systems, and methods for intelligently deactivating wake-word engines |
US10587430B1 (en) | 2018-09-14 | 2020-03-10 | Sonos, Inc. | Networked devices, systems, and methods for associating playback devices based on sound codes |
US11432030B2 (en) | 2018-09-14 | 2022-08-30 | Sonos, Inc. | Networked devices, systems, and methods for associating playback devices based on sound codes |
US11778259B2 (en) | 2018-09-14 | 2023-10-03 | Sonos, Inc. | Networked devices, systems and methods for associating playback devices based on sound codes |
US10878811B2 (en) | 2018-09-14 | 2020-12-29 | Sonos, Inc. | Networked devices, systems, and methods for intelligently deactivating wake-word engines |
US11790937B2 (en) | 2018-09-21 | 2023-10-17 | Sonos, Inc. | Voice detection optimization using sound metadata |
US11024331B2 (en) | 2018-09-21 | 2021-06-01 | Sonos, Inc. | Voice detection optimization using sound metadata |
US10573321B1 (en) | 2018-09-25 | 2020-02-25 | Sonos, Inc. | Voice detection optimization based on selected voice assistant service |
US10811015B2 (en) | 2018-09-25 | 2020-10-20 | Sonos, Inc. | Voice detection optimization based on selected voice assistant service |
US11727936B2 (en) | 2018-09-25 | 2023-08-15 | Sonos, Inc. | Voice detection optimization based on selected voice assistant service |
US11031014B2 (en) | 2018-09-25 | 2021-06-08 | Sonos, Inc. | Voice detection optimization based on selected voice assistant service |
US11790911B2 (en) | 2018-09-28 | 2023-10-17 | Sonos, Inc. | Systems and methods for selective wake word detection using neural network models |
US11100923B2 (en) | 2018-09-28 | 2021-08-24 | Sonos, Inc. | Systems and methods for selective wake word detection using neural network models |
US10692518B2 (en) | 2018-09-29 | 2020-06-23 | Sonos, Inc. | Linear filtering for noise-suppressed speech detection via multiple network microphone devices |
US11501795B2 (en) | 2018-09-29 | 2022-11-15 | Sonos, Inc. | Linear filtering for noise-suppressed speech detection via multiple network microphone devices |
US11899519B2 (en) | 2018-10-23 | 2024-02-13 | Sonos, Inc. | Multiple stage network microphone device with reduced power consumption and processing load |
US11741948B2 (en) | 2018-11-15 | 2023-08-29 | Sonos Vox France Sas | Dilated convolutions and gating for efficient keyword spotting |
US11200889B2 (en) | 2018-11-15 | 2021-12-14 | Sonos, Inc. | Dilated convolutions and gating for efficient keyword spotting |
US11215473B2 (en) | 2018-11-27 | 2022-01-04 | Toyota Jidosha Kabushiki Kaisha | Driving support device, driving support method, and non-transitory recording medium in which program is stored |
US11557294B2 (en) | 2018-12-07 | 2023-01-17 | Sonos, Inc. | Systems and methods of operating media playback systems having multiple voice assistant services |
US11183183B2 (en) | 2018-12-07 | 2021-11-23 | Sonos, Inc. | Systems and methods of operating media playback systems having multiple voice assistant services |
US11538460B2 (en) | 2018-12-13 | 2022-12-27 | Sonos, Inc. | Networked microphone devices, systems, and methods of localized arbitration |
US11132989B2 (en) | 2018-12-13 | 2021-09-28 | Sonos, Inc. | Networked microphone devices, systems, and methods of localized arbitration |
US11540047B2 (en) | 2018-12-20 | 2022-12-27 | Sonos, Inc. | Optimization of network microphone devices using noise classification |
US11159880B2 (en) | 2018-12-20 | 2021-10-26 | Sonos, Inc. | Optimization of network microphone devices using noise classification |
US10602268B1 (en) | 2018-12-20 | 2020-03-24 | Sonos, Inc. | Optimization of network microphone devices using noise classification |
US10867604B2 (en) | 2019-02-08 | 2020-12-15 | Sonos, Inc. | Devices, systems, and methods for distributed voice processing |
US11646023B2 (en) | 2019-02-08 | 2023-05-09 | Sonos, Inc. | Devices, systems, and methods for distributed voice processing |
US11315556B2 (en) | 2019-02-08 | 2022-04-26 | Sonos, Inc. | Devices, systems, and methods for distributed voice processing by transmitting sound data associated with a wake word to an appropriate device for identification |
US11798553B2 (en) | 2019-05-03 | 2023-10-24 | Sonos, Inc. | Voice assistant persistence across multiple network microphone devices |
US11120794B2 (en) | 2019-05-03 | 2021-09-14 | Sonos, Inc. | Voice assistant persistence across multiple network microphone devices |
US11200894B2 (en) | 2019-06-12 | 2021-12-14 | Sonos, Inc. | Network microphone device with command keyword eventing |
US11854547B2 (en) | 2019-06-12 | 2023-12-26 | Sonos, Inc. | Network microphone device with command keyword eventing |
US10586540B1 (en) | 2019-06-12 | 2020-03-10 | Sonos, Inc. | Network microphone device with command keyword conditioning |
US11361756B2 (en) | 2019-06-12 | 2022-06-14 | Sonos, Inc. | Conditional wake word eventing based on environment |
US11501773B2 (en) | 2019-06-12 | 2022-11-15 | Sonos, Inc. | Network microphone device with command keyword conditioning |
US11710487B2 (en) | 2019-07-31 | 2023-07-25 | Sonos, Inc. | Locally distributed keyword detection |
US11138975B2 (en) | 2019-07-31 | 2021-10-05 | Sonos, Inc. | Locally distributed keyword detection |
US11714600B2 (en) | 2019-07-31 | 2023-08-01 | Sonos, Inc. | Noise classification for event detection |
US11354092B2 (en) | 2019-07-31 | 2022-06-07 | Sonos, Inc. | Noise classification for event detection |
US10871943B1 (en) | 2019-07-31 | 2020-12-22 | Sonos, Inc. | Noise classification for event detection |
US11551669B2 (en) | 2019-07-31 | 2023-01-10 | Sonos, Inc. | Locally distributed keyword detection |
US11138969B2 (en) | 2019-07-31 | 2021-10-05 | Sonos, Inc. | Locally distributed keyword detection |
US11189286B2 (en) | 2019-10-22 | 2021-11-30 | Sonos, Inc. | VAS toggle based on device orientation |
US11862161B2 (en) | 2019-10-22 | 2024-01-02 | Sonos, Inc. | VAS toggle based on device orientation |
US11200900B2 (en) | 2019-12-20 | 2021-12-14 | Sonos, Inc. | Offline voice control |
US11869503B2 (en) | 2019-12-20 | 2024-01-09 | Sonos, Inc. | Offline voice control |
US11562740B2 (en) | 2020-01-07 | 2023-01-24 | Sonos, Inc. | Voice verification for media playback |
US11556307B2 (en) | 2020-01-31 | 2023-01-17 | Sonos, Inc. | Local voice data processing |
US11961519B2 (en) | 2020-02-07 | 2024-04-16 | Sonos, Inc. | Localized wakeword verification |
US11308958B2 (en) | 2020-02-07 | 2022-04-19 | Sonos, Inc. | Localized wakeword verification |
US11308962B2 (en) | 2020-05-20 | 2022-04-19 | Sonos, Inc. | Input detection windowing |
US11727919B2 (en) | 2020-05-20 | 2023-08-15 | Sonos, Inc. | Memory allocation for keyword spotting engines |
US11482224B2 (en) | 2020-05-20 | 2022-10-25 | Sonos, Inc. | Command keywords with input detection windowing |
US11698771B2 (en) | 2020-08-25 | 2023-07-11 | Sonos, Inc. | Vocal guidance engines for playback devices |
US11984123B2 (en) | 2020-11-12 | 2024-05-14 | Sonos, Inc. | Network device interaction by range |
US11551700B2 (en) | 2021-01-25 | 2023-01-10 | Sonos, Inc. | Systems and methods for power-efficient keyword detection |
Also Published As
Publication number | Publication date |
---|---|
JP6037026B2 (ja) | 2016-11-30 |
US20160225367A1 (en) | 2016-08-04 |
US10163435B2 (en) | 2018-12-25 |
JPWO2015037396A1 (ja) | 2017-03-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6037026B2 (ja) | Audio output control device, program, and recording medium | |
JP6020189B2 (ja) | Audio output control device | |
US7880602B2 (en) | Image display control apparatus | |
JP6221945B2 (ja) | Display and audio output control device | |
JP5662273B2 (ja) | Interrupt control device and interrupt control method | |
JP5304853B2 (ja) | Cooperation system, navigation system, in-vehicle device, and portable terminal | |
JP6343188B2 (ja) | Information presentation device, information presentation method, and program | |
US20170287476A1 (en) | Vehicle aware speech recognition systems and methods | |
KR102418660B1 (ko) | Embedded system for vehicle congestion control and method for vehicle congestion control | |
JP5733057B2 (ja) | Platform device, program, and system | |
JP7294200B2 (ja) | Information processing device, vehicle system, information processing method, and program | |
WO2018134198A1 (en) | Communication control apparatus and method | |
EP3831638A1 (en) | Display control device, vehicle, display control method, and storage medium storing program | |
US20190221111A1 (en) | Information processing device and information processing method | |
EP4385846A1 (en) | Method and apparatus for prompting state information of vehicle | |
JP7377043B2 (ja) | Operation reception device and program | |
CN117191071A (zh) | In-vehicle navigation remote reservation method, system, computer device, and storage medium | |
JP2019137320A (ja) | Information processing device and information processing method | |
WO2015087493A1 (ja) | Data processing device and message processing program product | |
CN116009798A (zh) | Job method and image forming apparatus | |
EP4325395A2 (en) | Hybrid rule engine for vehicle automation | |
US20230166658A1 (en) | Method, computer program and apparatus for playback of messages in a vehicle | |
CN116331240A (zh) | Interaction control method, apparatus, device, and medium based on vehicle steering wheel | |
CN118012301A (zh) | Vehicle head unit control method and apparatus | |
CN114516339A (zh) | Information reminder method, apparatus, device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14843871; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2015536504; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 14917439; Country of ref document: US |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 14843871; Country of ref document: EP; Kind code of ref document: A1 |