WO2016183825A1 - Method and terminal device for locating a sound emitting position - Google Patents
Method and terminal device for locating a sound emitting position
- Publication number
- WO2016183825A1 (application no. PCT/CN2015/079391, CN2015079391W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sound
- sound signals
- sound signal
- terminal device
- voice commands
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0316—Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude
- G10L21/0364—Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude for improving intelligibility
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R11/00—Arrangements for holding or mounting articles, not otherwise provided for
- B60R11/02—Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof
- B60R11/0217—Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof for loud-speakers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0272—Voice signal separating
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
- H04R1/32—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
- H04R1/40—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
- H04R1/406—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
- H04S7/303—Tracking of listener position or orientation
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R16/00—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
- B60R16/02—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
- B60R16/037—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for occupant comfort, e.g. for automatic adjustment of appliances according to personal settings, e.g. seats, mirrors, steering wheel
- B60R16/0373—Voice control
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L2021/02082—Noise filtering the noise being echo, reverberation of the speech
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L2021/02161—Number of inputs available containing the signal or the noise to be suppressed
- G10L2021/02166—Microphone arrays; Beamforming
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2201/00—Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
- H04R2201/02—Details casings, cabinets or mounting therein for transducers covered by H04R1/02 but not provided for in any of its subgroups
- H04R2201/025—Transducer mountings or cabinet supports enabling variable orientation of transducer of cabinet
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2430/00—Signal processing covered by H04R, not provided for in its groups
- H04R2430/20—Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
- H04R2430/21—Direction finding using differential microphone array [DMA]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2430/00—Signal processing covered by H04R, not provided for in its groups
- H04R2430/20—Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
- H04R2430/23—Direction finding using a sum-delay beam-former
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2499/00—Aspects covered by H04R or H04S not otherwise provided for in their subgroups
- H04R2499/10—General applications
- H04R2499/13—Acoustic transducers and sound field adaptation in vehicles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
Definitions
- the embodiments of the present invention relate to the field of mobile communications, and in particular, to a method and a terminal device for locating a sound emitting position.
- Speech recognition is a core technology of the human-computer interaction interface of intelligent information systems.
- in the prior art, a sound collection sensor is generally used to collect sound signals, and sound signal acquisition and speech recognition are performed for only one sound position.
- existing schemes for improving the speech recognition success rate can extract only the sound signal emitted from one position; sound signals emitted from other positions are treated as noise, so their emitting positions cannot be accurately extracted and located, and speech recognition cannot be performed on them.
- for example, an in-vehicle system can collect sound signals from the surrounding environment, extract the sound signal emitted from the driver's cab, and perform voice recognition on the extracted signal, so that the in-vehicle system responds only to sound signals from the driver's cab.
- the sound signal emitted from the passenger cab or from the rear seats of the vehicle is filtered out by the in-vehicle system as noise; its emitting position cannot be accurately extracted and located, and speech recognition is impossible.
- for example, the in-vehicle system can extract and recognize the voice command "open the skylight" issued from the driver's cab, but it cannot extract the same voice command issued from the passenger cab or from other positions such as the rear seats, nor locate the emitting positions of those other sound signals.
- the in-vehicle system therefore cannot efficiently and accurately locate the emitting positions of other sound signals in the car, which reduces the efficiency of locating the position from which a sound signal is emitted and degrades the user experience.
- Embodiments of the present invention provide a method and a terminal device for locating a sound emitting position, so as to solve the problem that only sound information emitted from a single location can be located and extracted, and sound signals emitted from other locations cannot be located and extracted.
- a first aspect of the present invention provides a method for locating a sound emitting position, comprising: acquiring K first sound signals, wherein K is an integer greater than or equal to 2; extracting M second sound signals from the K first sound signals according to N position parameters corresponding to N different positions, wherein M is less than or equal to N and N is an integer greater than or equal to 2; and determining a position corresponding to each second sound signal.
- the extracting of the M second sound signals from the K first sound signals according to the N position parameters corresponding to the N different positions includes: using a beamforming algorithm to extract the M second sound signals from the K first sound signals according to the N position parameters, respectively.
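As a rough illustration of how a beamforming algorithm can extract the signal arriving from a known position, the following sketch implements a simple delay-and-sum beamformer over the microphone channels. This is one common beamforming variant, not necessarily the patent's implementation; in practice the steering delays would be derived from the position parameters, and all names and numbers below are invented for illustration.

```python
import numpy as np

def delay_and_sum(channels, delays_samples):
    """Delay-and-sum beamformer: align each microphone channel by its
    steering delay (in samples) toward one position, then average.
    Signals from that position add coherently; signals and noise from
    other directions are attenuated. np.roll wraps around, which is
    acceptable for this short sketch."""
    aligned = [np.roll(ch, -d) for ch, d in zip(channels, delays_samples)]
    return np.mean(aligned, axis=0)

# Toy demo with K = 2: a source whose wavefront reaches mic 2 five samples late.
rng = np.random.default_rng(0)
target = np.sin(2 * np.pi * 0.05 * np.arange(400))
mic1 = target + 0.3 * rng.standard_normal(400)
mic2 = np.roll(target, 5) + 0.3 * rng.standard_normal(400)

# Steering toward the source position: undo the 5-sample lag on mic 2.
out = delay_and_sum([mic1, mic2], [0, 5])
```

With the channels aligned toward the steered position, the averaged output correlates more strongly with the target waveform than either raw microphone channel does.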
- the determining of the position corresponding to each second sound signal specifically includes: determining, according to the position parameter corresponding to the Lth second sound signal, a position L corresponding to the Lth second sound signal, wherein the Lth second sound signal is any one of the M second sound signals.
- the method further includes: performing voice recognition on the extracted M second sound signals, and acquiring M voice commands corresponding to the M second sound signals.
- the method further includes: responding to the M voice commands.
- responding to the M voice commands includes: according to the priorities of the M different locations corresponding to the M voice commands, preferentially responding to high-priority voice commands.
- a second aspect of the present invention provides a terminal device, comprising: K sound collection sensors, configured to acquire K first sound signals, wherein K is an integer greater than or equal to 2; and a processor, configured to extract M second sound signals from the K first sound signals according to N position parameters corresponding to N different positions, and to determine a position corresponding to each second sound signal, wherein M is less than or equal to N and N is an integer greater than or equal to 2.
- the processor is configured to extract M second sound signals from the K first sound signals according to N position parameters corresponding to the N different positions, specifically including:
- the processor is configured to extract M second sound signals from the K first sound signals according to the N position parameters by using a beamforming algorithm.
- the determining, by the processor, of the position corresponding to each second sound signal specifically includes: determining, according to the position parameter corresponding to the Lth second sound signal, a position L corresponding to the Lth second sound signal, wherein the Lth second sound signal is any one of the M second sound signals.
- the processor is further configured to, after extracting the M second sound signals from the K first sound signals, perform voice recognition on the extracted M second sound signals and acquire the M voice commands corresponding to the M second sound signals.
- the terminal device further includes an output device, where the output device is configured to respond to the M voice commands after the processor acquires the M voice commands corresponding to the M second sound signals.
- the output device is configured to respond to the M voice commands, and specifically includes:
- the output device is configured to preferentially respond to a command with a high priority according to priorities of M different locations corresponding to the M voice commands.
- the coordinates of the K sound collection sensors in a three-dimensional space are different.
- a third aspect of the present invention provides a device for locating a sound emitting position, the device comprising an acquisition module, an extraction module, and a determination module, wherein the acquisition module is configured to acquire K first sound signals, wherein K is an integer greater than or equal to 2; the extraction module is configured to extract M second sound signals from the K first sound signals according to N position parameters corresponding to N different positions, wherein M is less than or equal to N and N is an integer greater than or equal to 2; and the determination module is configured to determine a position corresponding to each second sound signal.
- the extracting module is configured to extract M second sound signals from the K first sound signals according to the N position parameters corresponding to the N different positions, and specifically includes: Using the beamforming algorithm, M second sound signals are extracted from the K first sound signals according to the N position parameters, respectively.
- the determination module is configured to determine the location corresponding to each second sound signal, specifically: to determine, according to the position parameter corresponding to the Lth second sound signal, a position L corresponding to the Lth second sound signal, wherein the Lth second sound signal is any one of the M second sound signals.
- the apparatus further includes a voice recognition module and an acquiring module, where the voice recognition module is configured to perform voice recognition on the extracted M second sound signals after the extraction module extracts the M second sound signals from the K first sound signals, and the acquiring module is configured to acquire the M voice commands corresponding to the M second sound signals.
- the apparatus further includes a response module, where the response module is configured to respond to the M voice commands after the acquiring module acquires the M voice commands corresponding to the M second sound signals.
- the responding of the response module to the M voice commands includes: preferentially responding to high-priority voice commands according to the priorities of the M different locations corresponding to the M voice commands.
- the embodiments of the present invention have the following advantages: using the beamforming algorithm, M second sound signals are extracted from the K first sound signals according to the position parameters, thereby determining the emitting position corresponding to each second sound signal. In this way, sound signals from different positions can be efficiently extracted, the voice recognition capability is improved, and a better user experience is provided.
- conflicting commands are processed by a priority scheme, reducing the errors caused by the on-board central control device responding to multiple commands at the same time.
- FIG. 1 is a flowchart of a method for positioning a sound emitting position according to an embodiment of the present invention
- FIG. 2A is a schematic view showing the position of an interior compartment of a vehicle for positioning a sound emitting position according to an embodiment of the present invention
- FIG. 2B is a schematic view showing the position of an interior compartment of a vehicle for positioning a sound emitting position according to another embodiment of the present invention
- FIG. 3 is a flowchart of a method for positioning a sound emitting position according to another embodiment of the present invention.
- FIG. 3A is a flowchart of a method for positioning a sound emitting position according to another embodiment of the present invention.
- FIG. 3B is a flowchart of a method for positioning a sound emitting position according to another embodiment of the present invention.
- FIG. 4 is a schematic structural diagram of a terminal device 400 according to an embodiment of the present invention.
- the embodiment of the present invention provides a method for locating a sound emitting position.
- the terminal device may be an in-vehicle central control device, a smart phone, a tablet computer, or the like.
- the terms "first sound signal" and "second sound signal" are used only for distinction and do not represent any order or sequence.
- the application scenario of the embodiment of the present invention may be any sound collection and voice recognition scenario.
- taking sound acquisition and speech recognition in an in-vehicle system as an example, the method includes the following steps:
- K sound collection sensors are installed inside the in-vehicle system, and the processor can acquire K first sound signals, wherein K is an integer greater than or equal to 2.
- K can be set to 2, that is, a first sound collection sensor and a second sound collection sensor can be installed in the cab and the passenger cab, respectively.
- the first sound collection sensor and the second sound collection sensor simultaneously acquire the first sound signal.
- other sound collection sensors can be installed in the rear seat of the vehicle or other locations in the vehicle.
- the first sound signal is an ambient sound inside the in-vehicle system, and includes a sound signal emitted from different positions in the vehicle and a sound signal outside the vehicle.
- the first sound signal may include at least one of: a sound signal emitted from the cab position (e.g., position 1 as shown in FIG. 2A), a sound signal emitted from the passenger cab position (e.g., position 2 as shown in FIG. 2A), a sound signal emitted from the rear seat positions of the in-vehicle system (e.g., position 3 and position 4 as shown in FIG. 2A), and noise outside the in-vehicle system.
- the coordinates of the first sound collection sensor and the second sound collection sensor are not coincident in the spatial position, and the first sound collection sensor and the second sound collection sensor are separated by a certain distance.
- the first sound collection sensor and the second sound collection sensor are respectively disposed on the left and right sides of the center rear view mirror A of the in-vehicle system.
- the first sound collection sensor is disposed at position C of the in-vehicle system, and the second sound collection sensor is disposed at position B of the in-vehicle system. Because the two sensors are separated, the same sound signal reaches them at different times, so the sound signal collected by the first sound collection sensor and the sound signal collected by the second sound collection sensor form a phase difference.
- the in-vehicle system includes four sound collection sensors, and at this time, K is 4.
- the four sound collection sensors are disposed at a central position of the in-vehicle system as shown in FIG. 2B.
- extracting M second sound signals from the K first sound signals specifically includes: extracting M second sound signals from the K first sound signals by using a beamforming algorithm; or using the beamforming algorithm to filter out other sound signals from the K first sound signals and extract the M second sound signals.
- for example, if the position where the sound signal is emitted is the cab position, the corresponding position parameter is the parameter of the cab position;
- the vehicle-mounted central control device extracts the second sound signal from the K first sound signals according to the position parameter corresponding to the cab, using a beamforming algorithm, and determines, according to the position parameter corresponding to the second sound signal, that the emitting position corresponding to the extracted second sound signal is the cab.
- the invention provides a method for locating a sound emitting position, which uses a beamforming algorithm to extract M second sound signals from K first sound signals according to position parameters, thereby determining the emitting position corresponding to each second sound signal. In this way, sound signals emitted from different locations can be efficiently extracted, the voice recognition capability is improved, and a better user experience is provided.
- FIG. 3 is a flow chart of a method for locating a sound emitting position according to another embodiment of the present invention.
- the embodiment of the present invention is also applied to an in-vehicle system as an example. As shown in FIG. 3, the method includes the following steps:
- position 1 is the cab position
- position 2 is the passenger cab position
- position 3 is the left side position of the rear seat of the in-vehicle system
- position 4 is the right side position of the rear seat of the in-vehicle system.
- the in-vehicle central control device sets the priority with which it responds to voice commands from each of the four different positions in the in-vehicle system.
- the voice command priority set by an ordinary family car is taken as an example.
- take as an example that the command "air conditioner start" is issued at position 1 and the command "air conditioner off" is simultaneously issued at position 4.
- K is 2 as an example.
- the first sound collection sensor and the second sound collection sensor are respectively mounted on the left and right sides of the rear view mirror A.
- the first sound collection sensor and the second sound collection sensor simultaneously acquire the first sound signal.
- other sound collection sensors can be installed in the rear seat of the vehicle or other locations in the vehicle.
- the first sound collection sensor and the second sound collection sensor simultaneously collect the sound signal of the command "air conditioner start" issued at position 1, and simultaneously collect the sound signal of the command "air conditioner off" issued at position 4.
- N is 4 and M is 2 for illustration.
- the first sound collection sensor and the second sound collection sensor are separated by a certain distance; therefore the same sound signal reaches the two sensors at different times, so the sound signal collected by the first sound collection sensor and the sound signal collected by the second sound collection sensor form a phase difference.
- the first sound collection sensor and the second sound collection sensor are disposed on the left and right rear view mirrors.
- the present invention does not limit the number of sound collection sensors, and the position of the sound collection sensor is not limited.
- other sound collecting sensors may be placed beside the position where the sound may be emitted, such as at the rear side of the seat at position 1 or position 2 as shown in FIG. 2A.
- using a beamforming algorithm, the in-vehicle central control device extracts, from the acquired first sound signals, the second sound signal emitted from position 1 according to the preset position parameter of position 1.
- similarly, using the beamforming algorithm, the in-vehicle central control device extracts, from the acquired first sound signals, the second sound signal emitted from position 4 according to the preset position parameter of position 4.
- the in-vehicle central control device uses the beamforming algorithm to extract, based on the position parameter of position 1, a sound signal that conforms to the preset position parameter of position 1, for example the "air conditioner start" sound signal issued from position 1; and to extract, based on the position parameter of position 4, a sound signal that conforms to the preset position parameter of position 4, for example the "air conditioner off" sound signal issued from position 4.
- the in-vehicle central control device uses the beamforming algorithm to extract two second sound signals from the two first sound signals according to the four position parameters.
- the second sound signal emitted from position 1 is extracted according to the position parameter of position 1 by using a beamforming algorithm, and it is determined, according to the position parameter corresponding to the second sound signal, that the emitting position corresponding to the extracted second sound signal is position 1.
- the in-vehicle central control device performs speech recognition on the extracted sound signals to identify their content.
- the vehicle-mounted central control device performs voice recognition on the sound signal extracted from position 1 and recognizes it as "air conditioner start"; it performs voice recognition on the sound signal extracted from position 4 and recognizes it as "air conditioner off".
- the in-vehicle central control device acquires the voice command corresponding to the extracted M second sound signals.
- the in-vehicle central control device acquires the voice command corresponding to the sound signal extracted from position 1, namely the voice command "air conditioner start", and acquires the voice command corresponding to the sound signal extracted from position 4, namely the voice command "air conditioner off".
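Acquiring the voice command corresponding to a recognized sound signal can be sketched as a lookup keyed by the recognized text, paired with the emitting position. The table, function name, and command tuples below are invented for illustration and are not part of the patent:

```python
# Hypothetical mapping from recognized text to a (resource, operation) command.
COMMAND_TABLE = {
    "air conditioner start": ("air_conditioner", "start"),
    "air conditioner off": ("air_conditioner", "off"),
    "open the skylight": ("skylight", "open"),
}

def to_voice_command(position, recognized_text):
    """Pair the emitting position with the command looked up from the
    recognized text; unknown text yields None."""
    action = COMMAND_TABLE.get(recognized_text)
    return (position, action) if action else None

cmd1 = to_voice_command(1, "air conditioner start")
cmd4 = to_voice_command(4, "air conditioner off")
```

Keeping the position attached to each command is what later allows the device to apply position-based priorities when commands conflict.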
- the in-vehicle central control device responds to the M voice commands according to the acquired voice commands corresponding to the extracted M second sound signals.
- the air conditioner is activated in response to the voice command.
- the in-vehicle central control device performs speech recognition on the sound signal extracted from position 1 and the sound signal extracted from position 4, and recognizes the extracted sound signals.
- the in-vehicle central control device responds to the two voice commands according to the acquired "air conditioning start” issued by the extracted position 1 and the "air conditioning off” voice command issued by the position 4.
- according to the priorities of the two different positions corresponding to the two voice commands, the high-priority voice command is responded to first; for example, the priority of position 1 is higher than the priority of position 4.
- the in-vehicle central control device first responds to the voice command "air conditioner start" of position 1, thereby turning on the air conditioner.
- the in-vehicle central control device then considers the voice command "air conditioner off" of position 4. Because the command of position 1 that has been responded to is "air conditioner start" and the command of position 4 is "air conditioner off", the two voice commands are conflicting commands, and the in-vehicle central control device cannot respond to both at the same time. Therefore, after performing voice recognition on the sound signal of position 4 and acquiring the corresponding voice command, the device does not respond to the voice command of position 4.
- by handling conflicting commands through priorities, the in-vehicle central control device avoids the incorrect behavior that would result from attempting to respond to multiple conflicting commands at once, thereby reducing response errors.
- specifically, at least two commands are conflicting commands if they use the same resource but the operations they perform on that resource are different.
- when the two acquired voice commands conflict, the in-vehicle central control device also applies a time criterion: if a conflicting command with lower priority is recognized within a preset time T1 after the high-priority command is recognized, the lower-priority command is ignored; if the conflicting command is recognized after the preset time T1 has elapsed, the in-vehicle central control device responds to the acquired voice commands in the order in which they were recognized.
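The priority-plus-time-window policy described above can be sketched as follows. This is an illustrative model, not the embodiment's implementation: the `Command` structure, the priority table, and the value of `T1` are assumptions.

```python
from dataclasses import dataclass

# Assumed position priorities: a lower number means a higher priority,
# so position 1 (e.g. the driver) outranks position 4.
PRIORITY = {1: 1, 2: 2, 3: 3, 4: 4}
T1 = 2.0  # assumed conflict window, in seconds

@dataclass
class Command:
    position: int   # seat the voice command came from
    action: str     # e.g. "ac_on", "ac_off"
    resource: str   # resource the command operates on, e.g. "ac"
    time: float     # time at which the command was recognized

def conflicts(a: Command, b: Command) -> bool:
    """Two commands conflict if they use the same resource
    but request different operations on it."""
    return a.resource == b.resource and a.action != b.action

def resolve(first: Command, second: Command) -> list:
    """Return the commands to respond to, in order.

    If the later command conflicts with the earlier one and arrives
    within T1 of it, only the higher-priority command is executed;
    after T1, commands are answered in recognition order."""
    if conflicts(first, second) and second.time - first.time <= T1:
        winner = min(first, second, key=lambda c: PRIORITY[c.position])
        return [winner]
    return [first, second]

ac_on = Command(position=1, action="ac_on", resource="ac", time=0.0)
ac_off = Command(position=4, action="ac_off", resource="ac", time=1.0)
print([c.action for c in resolve(ac_on, ac_off)])  # ['ac_on']: position 1 wins
```

With the same pair of commands separated by more than `T1`, both would be answered in time order, matching the behavior described above.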
- FIG. 3A is a flowchart of a method for locating a sound emitting position according to another embodiment of the present invention.
- the following steps may be performed:
- S401: Determine whether at least one seat of the in-vehicle system is occupied.
- the in-vehicle system can determine whether a seat is occupied by means of gravity sensing.
- for example, gravity sensing determines whether position 1, position 2, position 3, or position 4 in FIG. 2A is occupied.
- if no seat is occupied, step S301 is not performed.
- if at least one seat is occupied, step S301 is performed.
- before collecting sound signals, the system first determines whether at least one seat of the in-vehicle system is occupied, and locates the sound emitting position only when a seat is occupied, which improves the efficiency of sound collection and of determining the sound emitting position.
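The gating step S401 can be sketched minimally as follows; the per-seat weight readings and the occupancy threshold are assumptions, since the embodiment only specifies that gravity sensing is used.

```python
OCCUPANCY_THRESHOLD_KG = 10.0  # assumed minimum weight for a seat to count as occupied

def any_seat_occupied(seat_weights_kg: dict) -> bool:
    """S401: gravity sensing reports a weight per seat; the system
    proceeds to sound collection (S301) only if some seat is occupied."""
    return any(w >= OCCUPANCY_THRESHOLD_KG for w in seat_weights_kg.values())

# Positions 1-4 as in FIG. 2A; only position 1 is occupied here.
weights = {1: 72.5, 2: 0.0, 3: 0.0, 4: 0.0}
print(any_seat_occupied(weights))  # True -> perform S301
```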
- step S305a may be performed: identify the voiceprints of the extracted M second sound signals.
- S305b: Measure the weight of the user on the seat of the in-vehicle system.
- S305c: Determine the identity of the user by combining the measured weight of the user with the identified voiceprint of the second sound signal.
- S305d: Determine, according to the determined identity of the user, the priority of the voice command corresponding to the second sound signal sent by the user.
- S305e: Respond to the voice command corresponding to the second sound signal according to the priority of the voice command corresponding to the second sound signal sent by the user.
- in this way, the identity of the user, and hence the priority of the voice command corresponding to the sound signal sent by that user, is determined.
- the priority with which multiple voice commands are responded to is then determined in conjunction with the priority of the voice command corresponding to each user's sound signal.
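Steps S305a-S305e can be sketched as below. The enrolled profiles, the weight tolerance, and the priority numbering are all hypothetical; the embodiment only states that voiceprint and measured weight are combined to determine identity and priority.

```python
from typing import Optional

# Hypothetical enrolled user profiles: a voiceprint id, a typical
# seat weight, and the priority assigned to that user's commands.
PROFILES = {
    "alice": {"voiceprint": "vp_alice", "weight_kg": 62.0, "command_priority": 1},
    "bob":   {"voiceprint": "vp_bob",   "weight_kg": 85.0, "command_priority": 2},
}
WEIGHT_TOLERANCE_KG = 5.0  # assumed matching tolerance

def identify_user(recognized_voiceprint: str, measured_weight_kg: float) -> Optional[str]:
    """S305c: confirm identity only when both the voiceprint (S305a)
    and the measured seat weight (S305b) agree with a profile."""
    for name, p in PROFILES.items():
        if (p["voiceprint"] == recognized_voiceprint
                and abs(p["weight_kg"] - measured_weight_kg) <= WEIGHT_TOLERANCE_KG):
            return name
    return None

def command_priority(user: Optional[str]) -> int:
    """S305d: unknown speakers get the lowest priority (larger = lower)."""
    return PROFILES[user]["command_priority"] if user else 99

user = identify_user("vp_alice", 63.2)
print(user, command_priority(user))  # alice 1
```

S305e would then order the responses to multiple voice commands by the priorities this lookup returns.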
- the invention provides a method for locating a sound emitting position, which uses a beamforming algorithm to extract M second sound signals from K first sound signals according to position parameters, thereby determining the sound emitting position corresponding to each second sound signal. Further, priorities are set for voice commands, and conflicting commands are processed by responding to the higher-priority command first, which reduces the conflicts caused by the in-vehicle central control device responding to multiple conflicting commands, reduces response errors, and improves the user experience.
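The beamforming extraction step can be illustrated with a delay-and-sum sketch: each position parameter amounts to a set of per-sensor delays that time-align sound arriving from that seat, so that summing the aligned channels favors the target position. This is a simplified illustration, not the embodiment's exact algorithm; the sensor delays and test signal are assumptions.

```python
import numpy as np

def delay_and_sum(first_signals: np.ndarray, delays_samples: list) -> np.ndarray:
    """Extract one 'second sound signal' from K 'first sound signals'.

    first_signals: shape (K, T), one row per sound collection sensor.
    delays_samples: per-sensor arrival delay (the 'position parameter')
    toward the target position, in samples."""
    k, t = first_signals.shape
    aligned = np.zeros((k, t))
    for i, d in enumerate(delays_samples):
        aligned[i, :] = np.roll(first_signals[i], -d)  # undo the arrival delay
    # In-phase sound from the target position adds coherently;
    # sound from other positions averages down.
    return aligned.mean(axis=0)

# Two sensors (K=2): the same waveform reaches sensor 2 three samples late.
rng = np.random.default_rng(0)
src = rng.standard_normal(256)
signals = np.stack([src, np.roll(src, 3)])
beam = delay_and_sum(signals, delays_samples=[0, 3])
print(np.allclose(beam, src))  # True: the aligned sum recovers the source
```

Running this once per position parameter yields the M second sound signals, and the parameter used for each extraction directly identifies the corresponding position.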
- FIG. 4 is a schematic diagram of a terminal device 400 according to an embodiment of the present invention. It can be used to perform the aforementioned methods of the embodiments of the present invention.
- the terminal device 400 may be a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, an in-vehicle central control terminal device, or the like.
- the terminal device 400 includes components such as an RF (Radio Frequency) circuit 410, a memory 420, an input device 430, a display device 440, a sensor 450, an audio circuit 460, a WiFi (Wireless Fidelity) module 470, a processor 480, and a power source 490.
- FIG. 4 is only an example of an implementation and does not constitute a limitation on the terminal device, which may include more or fewer components than those illustrated, combine some components, or use a different arrangement of components.
- the RF circuit 410 can be used for receiving and transmitting signals during the sending and receiving of information or during a call. In particular, after receiving downlink information from a base station, the RF circuit 410 delivers it to the processor 480 for processing; in addition, it sends uplink data to the base station.
- the RF circuit 410 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like.
- the RF circuit 410 can also communicate with the network and other terminal devices through wireless communication.
- the wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), and the like.
- the memory 420 can be used to store software programs and modules, and the processor 480 executes various functional applications and data processing of the terminal device 400 by running software programs and modules stored in the memory 420.
- the memory 420 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the terminal device 400 (such as audio data and a phone book), and the like.
- the memory 420 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
- the display device 440 can be used to display information input by the user or information provided to the user and various menus of the terminal device 400.
- the display device 440 may include a display panel 441.
- the display panel 441 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like.
- the touch panel 431 can cover the display panel 441. When the touch panel 431 detects a touch operation on or near it, the touch panel 431 transmits the operation to the processor 480 to determine the type of the touch event, and the processor 480 then provides a corresponding visual output on the display panel 441 according to the type of the touch event.
- in FIG. 4, the touch panel 431 and the display panel 441 act as two separate components to implement the input and output functions of the terminal device 400; in some embodiments, however, the touch panel 431 and the display panel 441 can be integrated into a touch screen to implement the input and output functions of the terminal device 400.
- Terminal device 400 may also include at least one type of sensor 450, such as a light sensor, motion sensor, and other sensors.
- the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 441 according to the brightness of the ambient light, and the proximity sensor may close the display panel 441 when the terminal device 400 moves to the ear. Or backlight.
- as one kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (usually on three axes) and, when stationary, can detect the magnitude and direction of gravity. It can be used in applications that recognize the posture of the mobile phone (such as switching between landscape and portrait modes, related games, and magnetometer attitude calibration) and in functions related to vibration recognition (such as a pedometer or tapping). Other sensors that can be configured in the terminal device 400, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here.
- the audio circuit 460, the speaker 461, and the microphone 462 can provide an audio interface between the user and the terminal device 400.
- on one hand, the audio circuit 460 can convert received audio data into an electrical signal and transmit it to the speaker 461, which converts it into a sound signal for output; on the other hand, the microphone 462 converts a collected sound signal into an electrical signal, which the audio circuit 460 receives and converts into audio data. The audio data is then output to the processor 480 for processing, after which it may be sent to another device via the RF circuit 410 or output to the memory 420 for further processing.
- the terminal device 400 can help the user to send and receive emails, browse web pages, access streaming media, etc. through the WiFi module 470, which provides wireless broadband Internet access to the user.
- although FIG. 4 shows the WiFi module 470, it can be understood that it is not an essential part of the terminal device 400 and may be omitted as needed without changing the essence of the invention.
- the processor 480 is the control center of the terminal device 400. It connects the various parts of the entire device using various interfaces and lines, and performs the various functions of the terminal device 400 and processes data by running or executing the software programs and/or modules stored in the memory 420 and invoking data stored in the memory 420, thereby monitoring the terminal device as a whole.
- the processor 480 may include one or more processing units; preferably, the processor 480 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
- the modem processor primarily handles wireless communications. It can be understood that the above modem processor may not be integrated into the processor 480.
- the processor 480 may specifically be a central processing unit (CPU).
- the terminal device 400 further includes a power source 490 (such as a battery) for supplying power to the various components.
- the power source can be logically connected to the processor 480 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
- the terminal device 400 includes: K sound collection sensors 450 and a processor 480 having the following functions:
- the sound collection sensor 450 is configured to acquire K first sound signals; wherein K is an integer greater than or equal to 2.
- the coordinates of the K sound collection sensors in the three-dimensional space are different.
- the processor 480 is configured to extract M second sound signals from the K first sound signals according to N position parameters corresponding to the N different positions, and determine a position corresponding to each second sound signal, where M is less than or equal to N, and N is an integer greater than or equal to 2.
- the processor 480 being configured to determine a position corresponding to each second sound signal specifically includes: determining, according to the position parameter corresponding to the Lth second sound signal, the position L corresponding to the Lth second sound signal, where the Lth second sound signal is any one of the M second sound signals.
- the processor 480 is further configured to perform speech recognition on the extracted M second sound signals after extracting them from the K first sound signals, and to acquire M voice commands corresponding to the M second sound signals.
- the terminal device 400 further includes an output device 510, configured to respond to the M voice commands after the processor acquires the M voice commands corresponding to the M second sound signals.
- the output device 510 being configured to respond to the M voice commands specifically includes: the output device responding preferentially to the high-priority voice command according to the priorities of the M different positions corresponding to the M voice commands.
- the output device 510 may specifically be the audio circuit 460 or the display device 440.
- a method and a terminal device for locating a sound emitting position are provided.
- M second sound signals are extracted from K first sound signals according to position parameters, thereby determining the sound emitting position corresponding to each second sound signal. In this way, sound signals emitted from different positions can be extracted efficiently and recognized, providing a better user experience.
- the disclosed server and method may be implemented in other manners.
- the server embodiment described above is merely illustrative.
- the division of the unit is only a logical function division.
- there may be another division manner in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments of the present invention.
- each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. in.
- the foregoing program, when executed, performs the steps of the foregoing method embodiments; the foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Description
Command | Open sunroof | Close sunroof | Turn on radio | Play music |
Position 1 | 1 | 1 | 1 | 1 |
Position 2 | 1 | 1 | 2 | 2 |
Position 3 | 2 | 2 | 3 | 3 |
Position 4 | 2 | 2 | 4 | 4 |
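The priority table above maps each command to a per-position priority. A minimal lookup sketch (the command names are translated labels and the helper below is illustrative, not part of the claims):

```python
# Priority of each position for each command, taken from the table
# above (1 = highest priority).
COMMAND_PRIORITY = {
    "open sunroof":  {1: 1, 2: 1, 3: 2, 4: 2},
    "close sunroof": {1: 1, 2: 1, 3: 2, 4: 2},
    "turn on radio": {1: 1, 2: 2, 3: 3, 4: 4},
    "play music":    {1: 1, 2: 2, 3: 3, 4: 4},
}

def pick_winner(command: str, requesting_positions: list) -> int:
    """When the same command is issued from several positions, respond
    to the position the table ranks highest for that command."""
    return min(requesting_positions, key=lambda pos: COMMAND_PRIORITY[command][pos])

print(pick_winner("turn on radio", [2, 4]))   # 2
print(pick_winner("open sunroof", [3, 2]))    # 2
```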
Claims (13)
- A method for locating a sound emitting position, wherein the method comprises: collecting K first sound signals, where K is an integer greater than or equal to 2; extracting M second sound signals from the K first sound signals according to N position parameters corresponding to N different positions, where M is less than or equal to N and N is an integer greater than or equal to 2; and determining a position corresponding to each second sound signal.
- The method according to claim 1, wherein the extracting M second sound signals from the K first sound signals according to N position parameters corresponding to N different positions specifically comprises: extracting, by using a beamforming algorithm, the M second sound signals from the K first sound signals according to the N position parameters respectively.
- The method according to claim 1 or 2, wherein the determining a position corresponding to each second sound signal specifically comprises: determining, according to a position parameter corresponding to an Lth second sound signal, a position L corresponding to the Lth second sound signal, where the Lth second sound signal is any one of the M second sound signals.
- The method according to any one of claims 1 to 3, wherein after the M second sound signals are extracted from the K first sound signals, the method further comprises: performing speech recognition on the extracted M second sound signals; and acquiring M voice commands corresponding to the M second sound signals.
- The method according to claim 4, wherein after the M voice commands corresponding to the M second sound signals are acquired, the method further comprises: responding to the M voice commands.
- The method according to claim 5, wherein the responding to the M voice commands comprises: preferentially responding to a high-priority voice command according to priorities of the M different positions corresponding to the M voice commands.
- A terminal device, wherein the terminal device comprises: K sound collection sensors, configured to collect K first sound signals, where K is an integer greater than or equal to 2; and a processor, configured to extract M second sound signals from the K first sound signals according to N position parameters corresponding to N different positions and determine a position corresponding to each second sound signal, where M is less than or equal to N and N is an integer greater than or equal to 2.
- The terminal device according to claim 7, wherein the processor being configured to extract M second sound signals from the K first sound signals according to N position parameters corresponding to N different positions specifically comprises: the processor being configured to extract, by using a beamforming algorithm, the M second sound signals from the K first sound signals according to the N position parameters respectively.
- The terminal device according to claim 7 or 8, wherein the processor being configured to determine a position corresponding to each second sound signal specifically comprises: determining, according to a position parameter corresponding to an Lth second sound signal, a position L corresponding to the Lth second sound signal, where the Lth second sound signal is any one of the M second sound signals.
- The terminal device according to any one of claims 7 to 9, wherein the processor is further configured to: after extracting the M second sound signals from the K first sound signals, perform speech recognition on the extracted M second sound signals, and acquire M voice commands corresponding to the M second sound signals.
- The terminal device according to any one of claims 7 to 10, wherein the terminal device further comprises an output device, configured to respond to the M voice commands after the processor acquires the M voice commands corresponding to the M second sound signals.
- The device according to claim 11, wherein the output device being configured to respond to the M voice commands specifically comprises: the output device being configured to preferentially respond to a high-priority command according to priorities of the M different positions corresponding to the M voice commands.
- The device according to any one of claims 7 to 12, wherein coordinates of the K sound collection sensors in three-dimensional space are different.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP15892204.7A EP3264266B1 (en) | 2015-05-20 | 2015-05-20 | Method for positioning sounding location, and terminal device |
CN201580076714.3A CN107430524B (zh) | 2015-05-20 | 2015-05-20 | 一种定位声音发出位置的方法和终端设备 |
KR1020177030167A KR102098668B1 (ko) | 2015-05-20 | 2015-05-20 | 발음 위치 및 단말 장치 위치를 결정하는 방법 |
PCT/CN2015/079391 WO2016183825A1 (zh) | 2015-05-20 | 2015-05-20 | 一种定位声音发出位置的方法和终端设备 |
JP2017557075A JP6615227B2 (ja) | 2015-05-20 | 2015-05-20 | 音声の発生位置を特定するための方法及び端末デバイス |
US15/566,979 US10410650B2 (en) | 2015-05-20 | 2015-05-20 | Method for locating sound emitting position and terminal device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016183825A1 true WO2016183825A1 (zh) | 2016-11-24 |
Family
ID=57319145
Country Status (6)
Country | Link |
---|---|
US (1) | US10410650B2 (zh) |
EP (1) | EP3264266B1 (zh) |
JP (1) | JP6615227B2 (zh) |
KR (1) | KR102098668B1 (zh) |
CN (1) | CN107430524B (zh) |
WO (1) | WO2016183825A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019069731A1 (ja) * | 2017-10-06 | 2019-04-11 | ソニー株式会社 | 情報処理装置、情報処理方法、プログラム、および移動体 |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110556113A (zh) * | 2018-05-15 | 2019-12-10 | 上海博泰悦臻网络技术服务有限公司 | 基于声纹识别的车辆控制方法与云端服务器 |
DE102018212902A1 (de) | 2018-08-02 | 2020-02-06 | Bayerische Motoren Werke Aktiengesellschaft | Verfahren zum Bestimmen eines digitalen Assistenten zum Ausführen einer Fahrzeugfunktion aus einer Vielzahl von digitalen Assistenten in einem Fahrzeug, computerlesbares Medium, System, und Fahrzeug |
US10944588B2 (en) * | 2018-11-29 | 2021-03-09 | International Business Machines Corporation | Resolving conflicting commands received by an electronic device |
US11468886B2 (en) * | 2019-03-12 | 2022-10-11 | Lg Electronics Inc. | Artificial intelligence apparatus for performing voice control using voice extraction filter and method for the same |
CN110297702B (zh) * | 2019-05-27 | 2021-06-18 | 北京蓦然认知科技有限公司 | 一种多任务并行处理方法和装置 |
JP7198741B2 (ja) * | 2019-12-27 | 2023-01-04 | 本田技研工業株式会社 | 車両操作権管理装置、車両操作権管理方法及びプログラム |
KR20210133600A (ko) * | 2020-04-29 | 2021-11-08 | 현대자동차주식회사 | 차량 음성 인식 방법 및 장치 |
CN111786860B (zh) * | 2020-06-29 | 2022-04-01 | 广东美的制冷设备有限公司 | 家电及其控制方法和计算机可读存储介质 |
CN115503639A (zh) * | 2022-10-13 | 2022-12-23 | 广州小鹏汽车科技有限公司 | 语音处理方法、语音交互方法、服务器及存储介质 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140294195A1 (en) * | 2013-03-28 | 2014-10-02 | Jvis-Usa, Llc | Speaker system such as a sound bar assembly having improved sound quality |
CN104442622A (zh) * | 2013-09-25 | 2015-03-25 | 现代自动车株式会社 | 用于车辆的声音控制系统和方法 |
CN104464739A (zh) * | 2013-09-18 | 2015-03-25 | 华为技术有限公司 | 音频信号处理方法及装置、差分波束形成方法及装置 |
CN104572258A (zh) * | 2013-10-18 | 2015-04-29 | 通用汽车环球科技运作有限责任公司 | 用于在车载计算机系统处处理多个音频流的方法和设备 |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0418831A (ja) | 1990-05-14 | 1992-01-23 | Sony Corp | 遠隔制御装置 |
JP3863306B2 (ja) * | 1998-10-28 | 2006-12-27 | 富士通株式会社 | マイクロホンアレイ装置 |
JP3715584B2 (ja) | 2002-03-28 | 2005-11-09 | 富士通株式会社 | 機器制御装置および機器制御方法 |
JP4327510B2 (ja) | 2003-06-05 | 2009-09-09 | コニカミノルタビジネステクノロジーズ株式会社 | リモート操作システム |
CN1815556A (zh) | 2005-02-01 | 2006-08-09 | 松下电器产业株式会社 | 可利用语音命令操控车辆的方法及系统 |
US8214219B2 (en) * | 2006-09-15 | 2012-07-03 | Volkswagen Of America, Inc. | Speech communications system for a vehicle and method of operating a speech communications system for a vehicle |
US20090055180A1 (en) * | 2007-08-23 | 2009-02-26 | Coon Bradley S | System and method for optimizing speech recognition in a vehicle |
JP4547721B2 (ja) | 2008-05-21 | 2010-09-22 | 株式会社デンソー | 自動車用情報提供システム |
US8141115B2 (en) | 2008-12-17 | 2012-03-20 | At&T Labs, Inc. | Systems and methods for multiple media coordination |
US8660782B2 (en) | 2010-03-31 | 2014-02-25 | Denso International America, Inc. | Method of displaying traffic information and displaying traffic camera view for vehicle systems |
KR101987966B1 (ko) | 2012-09-03 | 2019-06-11 | 현대모비스 주식회사 | 차량용 어레이 마이크의 음성 인식 향상 시스템 및 그 방법 |
TWI598774B (zh) * | 2013-10-25 | 2017-09-11 | 和冠股份有限公司 | 電磁書寫單元及兼具墨水與電磁書寫功能的電磁式手寫筆 |
US20160012827A1 (en) * | 2014-07-10 | 2016-01-14 | Cambridge Silicon Radio Limited | Smart speakerphone |
US20160080861A1 (en) * | 2014-09-16 | 2016-03-17 | Toyota Motor Engineering & Manufacturing North America, Inc. | Dynamic microphone switching |
DE102015220400A1 (de) * | 2014-12-11 | 2016-06-16 | Hyundai Motor Company | Sprachempfangssystem im fahrzeug mittels audio-beamforming und verfahren zum steuern desselben |
US10304463B2 (en) * | 2016-10-03 | 2019-05-28 | Google Llc | Multi-user personalization at a voice interface device |
- 2015-05-20 US US15/566,979 patent/US10410650B2/en active Active
- 2015-05-20 JP JP2017557075A patent/JP6615227B2/ja active Active
- 2015-05-20 KR KR1020177030167A patent/KR102098668B1/ko active IP Right Grant
- 2015-05-20 EP EP15892204.7A patent/EP3264266B1/en active Active
- 2015-05-20 WO PCT/CN2015/079391 patent/WO2016183825A1/zh active Application Filing
- 2015-05-20 CN CN201580076714.3A patent/CN107430524B/zh active Active
Non-Patent Citations (1)
Title |
---|
See also references of EP3264266A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP3264266B1 (en) | 2020-08-05 |
KR20170129249A (ko) | 2017-11-24 |
CN107430524A (zh) | 2017-12-01 |
US20180108368A1 (en) | 2018-04-19 |
EP3264266A1 (en) | 2018-01-03 |
EP3264266A4 (en) | 2018-03-28 |
KR102098668B1 (ko) | 2020-04-08 |
CN107430524B (zh) | 2020-10-27 |
JP2018524620A (ja) | 2018-08-30 |
JP6615227B2 (ja) | 2019-12-04 |
US10410650B2 (en) | 2019-09-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016183825A1 (zh) | 一种定位声音发出位置的方法和终端设备 | |
US10834237B2 (en) | Method, apparatus, and storage medium for controlling cooperation of multiple intelligent devices with social application platform | |
US11314389B2 (en) | Method for presenting content based on checking of passenger equipment and distraction | |
US10183680B2 (en) | Mobile terminal and method for controlling application for vehicle | |
US9743222B2 (en) | Method for controlling and an electronic device thereof | |
KR102301880B1 (ko) | 전자 장치 및 이의 음성 대화 방법 | |
US9578668B2 (en) | Bluetooth pairing system and method | |
US20170243578A1 (en) | Voice processing method and device | |
US20150281430A1 (en) | Method and apparatus for providing information based on movement of an electronic device | |
CN109672775B (zh) | 调节唤醒灵敏度的方法、装置及终端 | |
KR20170061489A (ko) | 전자 기기 및 이의 운송 기기 제어 방법 | |
CN108068846B (zh) | 一种地铁乘车调度方法及移动终端 | |
CN113314120B (zh) | 处理方法、处理设备及存储介质 | |
KR102611775B1 (ko) | 그룹 메시지의 전달 방법 및 이를 수행하는 전자 장치 | |
CN108702410B (zh) | 一种情景模式控制方法及移动终端 | |
CN113254092B (zh) | 处理方法、设备及存储介质 | |
EP3945706A1 (en) | Systems and methods for bluetooth authentication using communication fingerprinting | |
CN113742027A (zh) | 交互方法、智能终端及可读存储介质 | |
CN112738730A (zh) | 搜救定位方法、装置及存储介质 | |
CN114021002A (zh) | 信息展示方法、移动终端及存储介质 | |
CN114974246A (zh) | 处理方法、智能终端及存储介质 | |
CN114201584A (zh) | 模板校验方法及相关装置 | |
CN113296678A (zh) | 处理方法、移动终端及存储介质 | |
CN116682423A (zh) | 一种语音意图匹配方法、装置、智能座舱和电子设备 | |
CN109874124A (zh) | 用于无线通信抑制的方法和设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15892204 Country of ref document: EP Kind code of ref document: A1 |
|
REEP | Request for entry into the european phase |
Ref document number: 2015892204 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15566979 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 20177030167 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2017557075 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |