CN104966514A - Speech recognition method and vehicle-mounted device - Google Patents


Info

Publication number: CN104966514A
Application number: CN201510217312.2A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 鲍伟, 王力劭
Current Assignee: Shenzhen Vcyber Technology Co., Ltd.
Original Assignee: BEIJING VCYBER TECHNOLOGY Co Ltd
Application filed by BEIJING VCYBER TECHNOLOGY Co Ltd
Legal status: Pending


Abstract

The invention discloses a speech recognition method and a vehicle-mounted device, and belongs to the field of speech recognition. The method includes: receiving a speech segment input by a user, the speech segment containing at least one instruction; obtaining a location parameter of the position where the speaker of the speech segment is located; and judging whether the location parameter meets a preset condition, and executing the instruction if it does. The preset conditions include at least a first preset condition corresponding to the driver's seat and a second preset condition corresponding to the front passenger seat. Because instructions input by a user in the front passenger seat can be recognized and executed in addition to instructions input by the user in the driver's seat, vehicle control efficiency is improved and the user experience is enhanced.

Description

Speech recognition method and vehicle-mounted device
Technical field
The present invention relates to the field of speech recognition, and in particular to a speech recognition method and a vehicle-mounted device.
Background art
For the convenience of users, current vehicle-mounted terminals integrate a speech recognition function: the terminal receives and recognizes a voice instruction issued by a user and performs the vehicle-mounted action corresponding to that instruction. For example, when a user issues the voice command "play music", the terminal recognizes the instruction and performs the corresponding action of playing music. However, when there are several people in the vehicle, the terminal cannot determine who issued the voice instruction, which makes recognition inaccurate and degrades the user experience.
The prior art provides a speech recognition method that receives and recognizes a voice instruction issued by a user and judges whether the instruction was issued from the driver's seat; if it was, the instruction is executed, and otherwise it is ignored.
With this prior-art method, however, the vehicle-mounted terminal can only perform the vehicle-mounted actions corresponding to voice instructions issued from the driver's seat and cannot perform the actions corresponding to voice instructions issued by other users, which reduces vehicle control efficiency and results in a poor user experience.
Summary of the invention
To improve vehicle control efficiency and enhance the user experience, embodiments of the present invention provide a speech recognition method and a vehicle-mounted device. The technical solutions are as follows:
In a first aspect, a speech recognition method is provided, the method comprising:
receiving a speech segment input by a user, the speech segment containing at least one instruction;
obtaining a location parameter of the position where the speaker of the speech segment is located;
judging whether the location parameter meets a preset condition, and executing the instruction if the location parameter meets the preset condition;
wherein the preset conditions include at least a first preset condition and a second preset condition, the first preset condition corresponding to the driver's seat and the second preset condition corresponding to the front passenger seat.
In a second aspect, a vehicle-mounted device is provided, the device comprising:
a receiving module, configured to receive a speech segment input by a user, the speech segment containing at least one instruction;
an obtaining module, configured to obtain a location parameter of the position where the speaker of the speech segment is located;
a judging module, configured to judge whether the location parameter meets a preset condition;
an executing module, configured to execute the instruction when the location parameter meets the preset condition;
wherein the preset conditions include at least a first preset condition and a second preset condition, the first preset condition corresponding to the driver's seat and the second preset condition corresponding to the front passenger seat.
The invention discloses a speech recognition method and a vehicle-mounted device. The method comprises: receiving a speech segment input by a user, the speech segment containing at least one instruction; obtaining a location parameter of the position where the speaker of the speech segment is located; and judging whether the location parameter meets a preset condition, and executing the instruction if it does, wherein the preset conditions include at least a first preset condition corresponding to the driver's seat and a second preset condition corresponding to the front passenger seat.
With this method, whether the location parameter of the speaker's position meets a preset condition determines whether the instruction contained in the speech segment is executed. Because the location parameter describes where the speaker is located, and because the first preset condition corresponds to the driver's seat while the second corresponds to the front passenger seat, the method can determine whether the speaker of the speech segment is in the driver's seat or the front passenger seat. Compared with the traditional approach of executing an instruction only after judging that the speech was issued from the driver's seat, the method can execute instructions contained in speech segments issued both by the occupant of the driver's seat and by the occupant of the front passenger seat, allowing the front passenger to control the vehicle as well, which improves vehicle control efficiency and the user experience.
In addition, because the method can distinguish whether the speaker is in the driver's seat or the front passenger seat, the vehicle-mounted device can recognize instructions from the front passenger seat as well as from the driver's seat, which further improves the speech recognition rate.
Furthermore, since the front passenger can control the vehicle-mounted device by issuing voice instructions, the driver no longer needs to issue voice instructions frequently while driving, reducing driver distraction and the associated traffic hazard and thereby improving driving safety.
Finally, the method recognizes only the instruction-bearing speech segments issued by the occupants of the driver's seat and the front passenger seat and does not recognize speech segments issued by occupants of other positions in the vehicle. This avoids the confusion in vehicle control that could arise if voice instructions from other positions were recognized, ensuring reliable vehicle operation and improving the security of vehicle control.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an interactive system provided by an embodiment of the present invention;
Fig. 2 is a flowchart of a speech recognition method provided by an embodiment of the present invention;
Fig. 3 is a flowchart of a speech recognition method provided by an embodiment of the present invention;
Fig. 4 is a flowchart of a speech recognition method provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a vehicle-mounted device provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The present invention relates to a speech recognition method applied in an interactive system that includes at least a vehicle and a vehicle-mounted device; such a system is shown in Fig. 1. Specifically, in this interactive system the vehicle-mounted device includes at least an audio module and a receiving/sending module, where the audio module includes at least an audio input module and an audio output module; the audio input module may include a microphone and the audio output module may include a loudspeaker. A user can input a speech segment to the vehicle-mounted device through its audio input module, and the device can present to the user, on its own display screen, an interface showing the instruction to be executed. The execution subject of the speech recognition method provided by the embodiments of the present invention may be the vehicle-mounted device.
Embodiment one
An embodiment of the present invention provides a speech recognition method. Referring to Fig. 2, the method flow includes:
201. Receive a speech segment input by a user, the speech segment containing at least one instruction.
202. Obtain a location parameter of the position where the speaker of the speech segment is located.
Optionally, the location parameter includes a distance parameter and an angle parameter.
203. Judge whether the location parameter meets a preset condition, and execute the instruction if it does.
The preset conditions include at least a first preset condition and a second preset condition, the first preset condition corresponding to the driver's seat and the second preset condition corresponding to the front passenger seat.
Optionally, this process may be:
judging whether the location parameter meets the first preset condition;
if the location parameter meets the first preset condition, executing the instruction;
if the location parameter does not meet the first preset condition, judging whether the location parameter meets the second preset condition;
if the location parameter meets the second preset condition, executing the instruction;
otherwise, not executing the instruction.
Optionally, before the speech segment input by the user is received, the method further includes:
setting the first preset condition and the second preset condition according to the location parameter of the driver's seat and the location parameter of the front passenger seat;
wherein the first preset condition requires that the distance parameter be greater than or equal to a first distance threshold and less than or equal to a second distance threshold, and that the angle parameter be greater than or equal to a first angle threshold and less than or equal to a second angle threshold;
and the second preset condition requires that the distance parameter be greater than or equal to a third distance threshold and less than or equal to a fourth distance threshold, and that the angle parameter be greater than or equal to a third angle threshold and less than or equal to a fourth angle threshold. A sketch of how such conditions and the ordered check above could be represented follows.
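Purely as an illustration, the sketch below shows one way the two preset conditions and the ordered check described above could be expressed in code. The class, the function names, and the numeric threshold values are assumptions for illustration; the patent fixes only the inequality structure, not any particular data structure or numbers.

```python
from dataclasses import dataclass

@dataclass
class PresetCondition:
    """A preset condition: a range of distances and angles relative to the audio input module."""
    min_distance: float  # e.g. first/third distance threshold (metres)
    max_distance: float  # e.g. second/fourth distance threshold
    min_angle: float     # e.g. first/third angle threshold (degrees)
    max_angle: float     # e.g. second/fourth angle threshold

    def is_met(self, distance: float, angle: float) -> bool:
        return (self.min_distance <= distance <= self.max_distance
                and self.min_angle <= angle <= self.max_angle)

# Hypothetical threshold values; any values satisfying the patent's inequalities could be used.
FIRST_CONDITION = PresetCondition(0.3, 0.9, 20.0, 90.0)     # driver's seat
SECOND_CONDITION = PresetCondition(0.4, 0.8, 100.0, 150.0)  # front passenger seat

def handle_instruction(distance: float, angle: float, execute) -> bool:
    """Check the first preset condition, then the second; execute the instruction only if one is met."""
    if FIRST_CONDITION.is_met(distance, angle) or SECOND_CONDITION.is_met(distance, angle):
        execute()
        return True
    return False  # speaker is in neither front seat: ignore the instruction
```

With these illustrative values, a speaker measured at 0.5 m and 45° would satisfy FIRST_CONDITION, so the instruction would be executed.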
Optionally, if the speech segment contains speech sub-segments issued by multiple users, the method further includes:
separating the speech segment to obtain the speech sub-segment corresponding to each user, each speech sub-segment containing M instructions;
judging whether the location parameter of the speaker's position for N of the M instructions meets the preset condition, and if it does, determining that all M instructions meet the preset condition;
where M and N are positive integers.
Optionally, executing the instruction further includes:
executing the instructions according to their priority;
where the priority indicates the execution order of the instructions, and an instruction that meets the first preset condition has a higher priority than an instruction that meets the second preset condition.
The embodiment of the present invention provides a speech recognition method that judges whether the location parameter of the speaker's position meets a preset condition and executes the instruction contained in the speech segment if it does. Because the first preset condition corresponds to the driver's seat and the second to the front passenger seat, the method can determine whether the speaker is in the driver's seat or the front passenger seat and, unlike the traditional approach that only executes instructions issued from the driver's seat, can execute instructions issued from either front seat. This lets the front passenger control the vehicle as well, improving vehicle control efficiency, the speech recognition rate, and the user experience; it reduces the need for the driver to issue voice instructions frequently while driving, which improves driving safety; and since speech segments issued from other positions in the vehicle are not recognized, confusion in vehicle control is avoided, ensuring reliable operation and improving the security of vehicle control.
Embodiment two
An embodiment of the present invention provides a speech recognition method, shown in Fig. 3. In this embodiment, the speech segments input by users include a speech segment input by a first user and a speech segment input by a second user; the first user may be the user in the driver's seat and the second user may be the user in the front passenger seat. In this embodiment the execution subject of the method is the vehicle-mounted device. Specifically, the method includes:
301. Set the first preset condition and the second preset condition.
Specifically, the first preset condition and the second preset condition are set according to the location parameter of the driver's seat and the location parameter of the front passenger seat.
It should be noted that the first preset condition corresponds to the driver's seat and the second preset condition corresponds to the front passenger seat.
The first preset condition may be that:
the distance parameter is greater than or equal to the first distance threshold and less than or equal to the second distance threshold, and the angle parameter is greater than or equal to the first angle threshold and less than or equal to the second angle threshold.
The second preset condition may be that:
the distance parameter is greater than or equal to the third distance threshold and less than or equal to the fourth distance threshold, and the angle parameter is greater than or equal to the third angle threshold and less than or equal to the fourth angle threshold.
In particular, the fourth distance threshold is less than or equal to the first distance threshold, and the fourth angle threshold is less than or equal to the first angle threshold. Setting the thresholds this way makes the range of location parameters accepted for voice instructions from the front passenger seat narrower than the range accepted for voice instructions from the driver's seat. The wider range for the driver's seat avoids missed recognition when the driver moves around while driving, improving the recognition rate, while the narrower range for the front passenger seat restricts the area within which the passenger's voice instructions are accepted, so that only instructions issued within the prescribed range are recognized. Together this ensures that the voice instructions of the driver and the front passenger can be recognized more accurately, improving the accuracy of speech recognition.
Within the vehicle, the distance parameter indicates the distance between the speaker of the instruction and the audio input module of the vehicle-mounted device, and the angle parameter indicates the direction of the speaker relative to the vehicle-mounted device, taking the audio input module as the reference point.
It should be noted that step 301 need not be performed before every execution of step 302; in actual use, the vehicle-mounted device may directly use a first preset condition and a second preset condition that were set in advance.
302. Receive a speech segment input by a user, the speech segment containing at least one instruction.
Specifically, the vehicle-mounted device may receive the speech segment input by the user through its own audio input module. The speech segment contains at least one instruction, and each instruction corresponds to a vehicle control action; for example, the instruction "open the door" corresponds to the vehicle control action of opening the door.
303. Recognize at least one instruction from the speech segment.
Specifically, the vehicle-mounted device may recognize the at least one instruction from the speech segment with its own recognition module, or it may send the speech segment to a server through its sending/receiving module, have the server recognize the at least one instruction from the speech segment, and then receive, through the same module, the information containing the at least one instruction returned by the server. The embodiment of the present invention does not limit the specific recognition device.
The recognition may be performed by comparing the keywords in the speech segment with stored keywords that each indicate an instruction, computing the similarity between each keyword in the segment and each stored keyword, selecting the stored keyword with the highest similarity, and taking the instruction indicated by that keyword as the recognized instruction. The at least one instruction may also be recognized from the speech segment in other ways; the embodiment of the present invention does not limit the specific recognition method.
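The sketch below illustrates one possible form of such keyword matching against stored instruction keywords. The keyword table, the similarity measure, and the threshold are assumptions made purely for illustration; the patent does not specify how similarity is computed.

```python
from difflib import SequenceMatcher

# Hypothetical table of stored instruction keywords.
INSTRUCTION_KEYWORDS = {
    "open the door": "DOOR_OPEN",
    "play music": "MUSIC_PLAY",
    "close the window": "WINDOW_CLOSE",
}

def similarity(a: str, b: str) -> float:
    """A simple string similarity in [0, 1]; any other similarity measure could be substituted."""
    return SequenceMatcher(None, a, b).ratio()

def match_instruction(spoken_keyword: str, threshold: float = 0.6):
    """Return the instruction whose stored keyword is most similar to the spoken keyword,
    or None if even the best match falls below the (assumed) threshold."""
    best_keyword = max(INSTRUCTION_KEYWORDS, key=lambda k: similarity(spoken_keyword, k))
    if similarity(spoken_keyword, best_keyword) >= threshold:
        return INSTRUCTION_KEYWORDS[best_keyword]
    return None
```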
304. Obtain the location parameter of the position where the speaker of the speech segment is located.
Specifically, the location parameter includes a distance parameter and an angle parameter.
The distance parameter corresponding to an instruction in the speech segment may be obtained with a preset distance-parameter decision model, for example as follows:
the instruction in the speech segment is fed into the distance-parameter decision model, the output is compared with a number of preset results, and the distance parameter associated with the preset result most similar to the output is taken as the distance parameter of the instruction. The distance parameter corresponding to the instruction in the speech segment may also be obtained in other ways; the embodiment of the present invention does not limit the specific manner.
Similarly, the angle parameter corresponding to an instruction in the speech segment may be obtained with a preset angle-parameter decision model, for example as follows:
the instruction in the speech segment is fed into the angle-parameter decision model, the output is compared with a number of preset results, and the angle parameter associated with the preset result most similar to the output is taken as the angle parameter of the instruction. The angle parameter corresponding to the instruction in the speech segment may also be obtained in other ways; the embodiment of the present invention does not limit the specific manner.
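A minimal sketch of the "compare the model output with preset results and take the closest one" step described above is given below. The decision model itself, the form of its output, the similarity measure, and the preset result table are all assumptions made purely for illustration.

```python
from typing import Callable, Sequence

def pick_closest_parameter(model_output: Sequence[float],
                           preset_results: dict[float, Sequence[float]],
                           similarity: Callable[[Sequence[float], Sequence[float]], float]) -> float:
    """Return the parameter (distance or angle) whose preset result is most similar
    to the decision model's output for the current instruction."""
    return max(preset_results, key=lambda p: similarity(model_output, preset_results[p]))

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity, used here only as a stand-in similarity measure."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

# Hypothetical usage: a distance-parameter decision model producing a feature vector,
# compared against preset vectors associated with known distances.
preset_distance_results = {0.5: [0.9, 0.1], 0.8: [0.4, 0.6]}   # distance (m) -> preset output
distance = pick_closest_parameter([0.85, 0.15], preset_distance_results, cosine)  # -> 0.5
```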
305. Judge whether the location parameter meets the first preset condition; if it does, perform step 306; if it does not, perform step 307.
Specifically, judge whether the distance parameter is greater than or equal to the first distance threshold and less than or equal to the second distance threshold, and whether the angle parameter is greater than or equal to the first angle threshold and less than or equal to the second angle threshold.
If the distance parameter is greater than or equal to the first distance threshold and less than or equal to the second distance threshold, and the angle parameter is greater than or equal to the first angle threshold and less than or equal to the second angle threshold, the location parameter is judged to meet the first preset condition; otherwise, it is judged not to meet the first preset condition.
306. Execute the instruction, and end.
307. Judge whether the location parameter meets the second preset condition; if it does, perform step 306; otherwise, do not execute the instruction, and end.
Specifically, judge whether the distance parameter is greater than or equal to the third distance threshold and less than or equal to the fourth distance threshold, and whether the angle parameter is greater than or equal to the third angle threshold and less than or equal to the fourth angle threshold.
If the distance parameter is greater than or equal to the third distance threshold and less than or equal to the fourth distance threshold, and the angle parameter is greater than or equal to the third angle threshold and less than or equal to the fourth angle threshold, the location parameter is judged to meet the second preset condition; otherwise, it is judged not to meet the second preset condition.
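Using the hypothetical PresetCondition sketch from Embodiment 1, steps 305 to 307 could be expressed as the ordered check below; the step numbers in the comments refer to this embodiment, and the names remain illustrative assumptions rather than the patent's prescribed implementation.

```python
def steps_305_to_307(distance: float, angle: float, execute) -> None:
    # Step 305: check the first preset condition (driver's seat).
    if FIRST_CONDITION.is_met(distance, angle):
        execute()   # Step 306: execute the instruction, and end.
        return
    # Step 307: check the second preset condition (front passenger seat).
    if SECOND_CONDITION.is_met(distance, angle):
        execute()   # Step 306: execute the instruction, and end.
        return
    # Neither condition is met: the instruction is not executed.
```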
It should be noted that steps 305 to 307 judge whether the location parameter meets a preset condition. As in this embodiment, the first preset condition may be checked first, with the second preset condition checked only when the first is not met; alternatively, the second preset condition may be checked first, with the first preset condition checked only when the second is not met. The embodiment of the present invention does not limit the specific order.
After steps 302 to 307 have been performed for this instruction, steps 302 to 307 are performed for the instruction contained in the next speech segment.
Optionally, if the speech segment contains speech sub-segments issued by multiple users, steps 308 to 3011 are also performed.
308. Separate the speech segment to obtain the speech sub-segment corresponding to each user, each speech sub-segment containing M instructions.
Specifically, the separation process may be:
obtaining all speech sub-segments, identifying the voiceprint parameter corresponding to each speech sub-segment, treating sub-segments with the same voiceprint parameter as sub-segments input by the same user, and thereby obtaining a speech sub-segment set for each user, the set containing all speech sub-segments input by that user.
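A sketch of the voiceprint-based grouping described for step 308 might look as follows. The SubSegment structure and the assumption that voiceprint parameters can be compared for equality are simplifications made only for illustration.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class SubSegment:
    audio: bytes      # raw audio of this speech sub-segment
    voiceprint: str   # voiceprint parameter identified for the sub-segment (assumed hashable)

def group_by_speaker(sub_segments: list[SubSegment]) -> dict[str, list[SubSegment]]:
    """Treat sub-segments with the same voiceprint parameter as input by the same user
    and return one sub-segment set per user."""
    per_user: dict[str, list[SubSegment]] = defaultdict(list)
    for seg in sub_segments:
        per_user[seg.voiceprint].append(seg)
    return dict(per_user)
```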
309. Recognize the M instructions contained in the speech sub-segment corresponding to each user.
Here M is a positive integer, indicating that a speech sub-segment contains one or more instructions.
The voice instructions contained in all speech sub-segments in a user's sub-segment set are recognized.
The recognition may be performed by matching voice instructions, for example as follows:
obtain the keywords in each speech sub-segment, the keywords describing voice instructions; match these keywords against a number of stored instruction keywords to obtain the similarity between them and the stored instructions; and if the similarity is greater than or equal to a preset threshold, determine the instruction corresponding to the keyword in the speech sub-segment.
In this way all instructions contained in the speech sub-segment corresponding to each user are obtained, M instructions in total.
3010. Judge whether the location parameter of the speaker's position for N of the M instructions meets the preset condition, and if it does, determine that all M instructions meet the preset condition.
Here N is a positive integer and N is less than M.
It should be noted that, after speech separation, if one user has input M instructions, then once the location parameters of N of those instructions are confirmed to meet the first preset condition, the location parameters of all the other instructions input by that user are assumed by default to meet the first preset condition or the second preset condition, so there is no need to confirm one by one whether the location parameter of each remaining instruction meets the first or the second preset condition.
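The following sketch illustrates the "N of M" default described above, building on the hypothetical PresetCondition sketch from Embodiment 1. The locate callback and the n_required parameter are illustrative assumptions.

```python
def user_instructions_satisfy(instructions, locate, condition, n_required: int = 1) -> bool:
    """Once N of a user's M instructions have location parameters meeting the condition,
    the remaining instructions of that user are treated as meeting it as well (step 3010).
    `locate` maps an instruction to its (distance, angle)."""
    confirmed = 0
    for instruction in instructions:
        distance, angle = locate(instruction)
        if condition.is_met(distance, angle):
            confirmed += 1
            if confirmed >= n_required:
                return True   # all M instructions are treated as meeting the condition
    return False
```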
3011. Execute the M instructions contained in the speech sub-segment of this user.
If the speech segment contains speech sub-segments issued by multiple users, the method provided by the embodiment of the present invention separates the speech segment to obtain the speech sub-segment corresponding to each user, recognizes the instructions contained in each sub-segment, and judges whether the location parameter of the speaker's position for at least one of those instructions meets a preset condition; if it does, the location parameters of all of that user's instructions are deemed to meet the condition. Thus, when several users issue voice instructions, it is enough for any one user that at least one of that user's instructions meets a preset condition; the remaining instructions are then taken to meet it as well, without having to check each one, which improves the efficiency of speech recognition. Voice instructions issued by users who meet a preset condition are recognized and executed, improving the recognition rate and the user experience, while after separation the voice instructions of users whose location parameters do not meet the preset conditions are not executed. This avoids the confusion in vehicle control that could arise if voice instructions from other positions in the vehicle were recognized, ensuring reliable vehicle operation and improving the security of vehicle control.
As in Embodiment 1, the method of this embodiment judges whether the location parameter of the speaker's position meets a preset condition and executes the instruction contained in the speech segment if it does. Because the first preset condition corresponds to the driver's seat and the second to the front passenger seat, instructions issued from either front seat can be recognized and executed, unlike the traditional approach that only handles instructions from the driver's seat. This improves vehicle control efficiency, the speech recognition rate, and the user experience; it reduces the need for the driver to issue voice instructions frequently while driving, improving driving safety; and since speech segments from other positions in the vehicle are not recognized, confusion in vehicle control is avoided, ensuring reliable operation and improving the security of vehicle control.
Embodiment three
An embodiment of the present invention provides a speech recognition method, shown in Fig. 4. In this embodiment, the speech segment input by users may contain speech sub-segments input by multiple users; the first preset condition corresponds to a first priority and the second preset condition corresponds to a second priority, where the priority indicates the execution order of instructions and an instruction that meets the first preset condition has a higher priority than an instruction that meets the second preset condition. The voice instructions received in this embodiment include at least voice instructions issued by two different users. The method includes:
401. Set the first preset condition and the second preset condition.
Specifically, this step is the same as step 301 in Embodiment 2 and is not repeated here.
402. Receive speech segments input by multiple users; the speech segment contains speech sub-segments issued by the multiple users.
Specifically, the way each user's speech segment is received in this step is the same as step 302 in Embodiment 2 and is not repeated here.
403. Separate the speech segment to obtain the speech sub-segment corresponding to each user, each speech sub-segment containing M instructions.
Specifically, this step is the same as step 308 in Embodiment 2 and is not repeated here.
404. Recognize the M instructions contained in the speech sub-segment corresponding to each user.
Specifically, this step is the same as step 309 in Embodiment 2 and is not repeated here.
405. Obtain the location parameter of the position where the speaker of the speech sub-segment is located.
Specifically, since the speech segment contains multiple speech sub-segments, the location parameter of the speaker of each sub-segment is obtained in the same way as in step 304 of Embodiment 2, which is not repeated here.
406. Judge whether the location parameter of the speaker's position for N of the M instructions meets the first preset condition; if it does, continue with step 405 for the speech sub-segment of the next user; if it does not, perform step 407.
If the N instructions meet the condition, all M instructions are determined to meet it.
Here N is a positive integer and N is less than M.
Specifically, for the M instructions corresponding to each user, it is judged whether the location parameters of N of those instructions meet the first preset condition or the second preset condition; N may be any number of the M instructions.
It should be noted that, after speech separation, when the first user has input M instructions, then once the location parameters of N of those instructions are confirmed to meet the first preset condition, the location parameters of all the other instructions input by that user are assumed by default to meet the first preset condition or the second preset condition, so there is no need to confirm one by one whether the location parameter of each remaining instruction meets the first or the second preset condition.
407. Judge whether the location parameter of the speaker's position for N of the M instructions meets the second preset condition; if it does, continue with step 405 for the speech sub-segment of the next user; if it does not, ignore this speech sub-segment and continue with step 405 for the speech sub-segment of the next user.
After steps 405 to 407 have been performed for all users, step 408 is performed.
408. Execute the instructions according to their priority.
The priority indicates the execution order of the instructions, and an instruction that meets the first preset condition has a higher priority than an instruction that meets the second preset condition.
Specifically, the instructions that meet the first preset condition are executed first, and the instructions that meet the second preset condition are executed after them.
Since the first priority corresponding to instructions that meet the first preset condition is higher than the second priority corresponding to instructions that meet the second preset condition, the instructions meeting the first preset condition are executed first, then the instructions meeting the second preset condition, and the procedure ends.
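A minimal sketch of this priority-ordered execution of step 408 follows. The pairing of each instruction with a condition rank (1 for the first preset condition, 2 for the second) and the instruction names are illustrative assumptions.

```python
def execute_by_priority(pending, execute) -> None:
    """Step 408: run instructions meeting the first preset condition (driver's seat) before
    instructions meeting the second preset condition (front passenger seat).
    `pending` is assumed to be a list of (instruction, condition_rank) pairs."""
    for instruction, _rank in sorted(pending, key=lambda item: item[1]):
        execute(instruction)

# Hypothetical usage: the door-open request from the driver's seat runs before
# the music request from the front passenger seat.
queue = [("MUSIC_PLAY", 2), ("DOOR_OPEN", 1)]
execute_by_priority(queue, lambda cmd: print("executing", cmd))
```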
Optionally, voice instructions issued by occupants of other seats may also be recognized by obtaining the location parameters of the voice instructions they issue, but the priority with which voice instructions from seats other than the driver's seat and the front passenger seat are executed is lower than the priority with which voice instructions from the front passenger seat are executed.
In this way the vehicle-mounted device preferentially executes the instructions that meet the first preset condition and only then the instructions that meet the second preset condition, that is, the instructions from the driver's seat before those from the front passenger seat, while voice instructions from other seats are executed with a priority lower than that of the front passenger seat. This guarantees the driver's seat priority in controlling the vehicle and avoids the confusion that would arise if, when the voice instructions of several seats all met preset conditions, the vehicle-mounted device could not determine whose instruction to execute first, thereby ensuring driving safety.
As in the previous embodiments, the method of this embodiment determines whether the speaker of a speech segment is in the driver's seat or the front passenger seat by judging whether the location parameter of the speaker's position meets the first or the second preset condition, and executes the contained instruction accordingly. Instructions from either front seat can therefore be recognized and executed, improving vehicle control efficiency, the speech recognition rate, driving safety, and the user experience, while speech segments from other positions in the vehicle are not recognized, ensuring reliable and secure vehicle control. In addition, by making the condition for executing voice instructions from the front passenger seat stricter than that for the driver's seat, this embodiment has the vehicle-mounted device execute instructions that meet the first preset condition before those that meet the second preset condition, that is, driver's-seat instructions before front-passenger instructions, and gives voice instructions from seats other than the two front seats an execution priority lower than that of the front passenger seat. This guarantees the driver's priority in controlling the vehicle, avoids the confusion that would arise if the device could not determine whose instruction to execute first when the voice instructions of several seats all met preset conditions, ensures driving safety, and also reduces voice instructions issued to the device from other seats, improving the speech recognition rate.
Embodiment four
An embodiment of the present invention provides a vehicle-mounted device 5. Referring to Fig. 5, the device comprises:
a receiving module 51, configured to receive a speech segment input by a user, the speech segment containing at least one instruction;
an obtaining module 52, configured to obtain a location parameter of the position where the speaker of the speech segment is located;
a judging module 53, configured to judge whether the location parameter meets a preset condition;
an executing module 54, configured to execute the instruction when the location parameter meets the preset condition;
wherein the preset conditions include at least a first preset condition and a second preset condition, the first preset condition corresponding to the driver's seat and the second preset condition corresponding to the front passenger seat.
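Purely as an illustration of how the modules listed above fit together, the sketch below wires hypothetical receiving, obtaining, judging, and executing components into one device class; all names and interfaces are assumptions, not the patent's required implementation.

```python
class VehicleMountedDevice:
    """Illustrative composition of the modules of Embodiment 4 (Fig. 5)."""

    def __init__(self, receiver, locator, conditions, executor):
        self.receiver = receiver      # receiving module 51: yields speech segments
        self.locator = locator        # obtaining module 52: speech segment -> (distance, angle)
        self.conditions = conditions  # judging module 53: list of preset conditions
        self.executor = executor      # executing module 54: runs an instruction

    def handle_segment(self) -> None:
        segment, instruction = self.receiver.receive()
        distance, angle = self.locator.locate(segment)
        # Judging module: the instruction is executed only if some preset condition is met.
        if any(cond.is_met(distance, angle) for cond in self.conditions):
            self.executor.execute(instruction)
```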
Optionally, the device further comprises:
a setting module, configured to set the first preset condition and the second preset condition according to the location parameter of the driver's seat and the location parameter of the front passenger seat;
wherein the first preset condition requires that the distance parameter be greater than or equal to a first distance threshold and less than or equal to a second distance threshold, and that the angle parameter be greater than or equal to a first angle threshold and less than or equal to a second angle threshold;
and the second preset condition requires that the distance parameter be greater than or equal to a third distance threshold and less than or equal to a fourth distance threshold, and that the angle parameter be greater than or equal to a third angle threshold and less than or equal to a fourth angle threshold.
Optionally,
the judging module 53 is further configured to judge whether the location parameter meets the first preset condition;
the executing module 54 is further configured to execute the instruction when the location parameter meets the first preset condition;
the judging module 53 is further configured to judge whether the location parameter meets the second preset condition when the location parameter does not meet the first preset condition;
the executing module 54 is further configured to execute the instruction when the location parameter meets the second preset condition;
and the executing module 54 is further configured not to execute the instruction when the location parameter does not meet the second preset condition.
Optionally, the device further comprises:
a speech separation module, configured to separate the speech segment when it contains speech sub-segments issued by multiple users and trigger the obtaining module to obtain the speech sub-segment corresponding to each user, each speech sub-segment containing M instructions;
the judging module 53 further comprises a determining submodule, configured to judge whether the location parameter of the speaker's position for N of the M instructions meets the preset condition and, when it does, determine that all M instructions meet the preset condition;
where M and N are positive integers.
Optionally, the executing module 54 is further specifically configured to:
execute the instructions according to their priority;
where the priority indicates the execution order of the instructions, and an instruction that meets the first preset condition has a higher priority than an instruction that meets the second preset condition.
The invention discloses a vehicle-mounted device that judges whether the location parameter of the speaker's position meets a preset condition and executes the instruction contained in the speech segment if it does. Because the first preset condition corresponds to the driver's seat and the second to the front passenger seat, the device can determine whether the speaker is in the driver's seat or the front passenger seat and, unlike traditional devices that only handle instructions from the driver's seat, can recognize and execute instructions issued from either front seat. This improves vehicle control efficiency, the speech recognition rate, and the user experience; it reduces the need for the driver to issue voice instructions frequently while driving, improving driving safety; and since speech segments from other positions in the vehicle are not recognized, confusion in vehicle control is avoided, ensuring reliable operation and improving the security of vehicle control.
It should be noted that "first" and "second" in the embodiments of the present invention are only used to distinguish the two items and do not denote anything in particular.
It should also be noted that when the vehicle-mounted device provided by the above embodiments performs speech recognition, the division into the functional modules described above is only an example; in practical applications, the functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the vehicle-mounted device provided by the above embodiments and the speech recognition method embodiments belong to the same concept; for its specific implementation, refer to the method embodiments, which are not repeated here.
A person of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A speech recognition method, characterized in that the method comprises:
receiving a speech segment input by a user, the speech segment containing at least one instruction;
obtaining a location parameter of the position where the speaker of the speech segment is located;
judging whether the location parameter meets a preset condition, and executing the instruction if the location parameter meets the preset condition;
wherein the preset conditions include at least a first preset condition and a second preset condition, the first preset condition corresponding to the driver's seat and the second preset condition corresponding to the front passenger seat.
2. The method according to claim 1, characterized in that the location parameter comprises a distance parameter and an angle parameter, and the method further comprises:
setting the first preset condition and the second preset condition according to the location parameter of the driver's seat and the location parameter of the front passenger seat;
wherein the first preset condition comprises that the distance parameter is greater than or equal to a first distance threshold and less than or equal to a second distance threshold, and the angle parameter is greater than or equal to a first angle threshold and less than or equal to a second angle threshold;
and the second preset condition comprises that the distance parameter is greater than or equal to a third distance threshold and less than or equal to a fourth distance threshold, and the angle parameter is greater than or equal to a third angle threshold and less than or equal to a fourth angle threshold.
3. The method according to claim 2, characterized in that judging whether the location parameter meets the preset condition and executing the instruction if it does comprises:
judging whether the location parameter meets the first preset condition;
if the location parameter meets the first preset condition, executing the instruction;
if the location parameter does not meet the first preset condition, judging whether the location parameter meets the second preset condition;
if the location parameter meets the second preset condition, executing the instruction;
and if the location parameter does not meet the second preset condition, not executing the instruction.
4. The method according to claim 1 or 3, characterized in that the method further comprises:
if the speech segment contains speech sub-segments issued by multiple users, separating the speech segment to obtain the speech sub-segment corresponding to each user, the speech sub-segment containing M instructions;
judging whether the location parameter of the speaker's position for N of the M instructions meets the preset condition, and if it does, determining that the M instructions meet the preset condition;
wherein M and N are positive integers.
5. The method according to claim 4, characterized in that executing the instruction comprises:
executing the instructions according to their priorities;
wherein the priority is used to indicate the execution order of the instructions, and the priority of an instruction satisfying the first preset condition is higher than the priority of an instruction satisfying the second preset condition.
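Claims 4 and 5 describe separating a multi-speaker segment into per-user sub-segments and then executing the accepted instructions with driver-seat commands taking precedence over co-driver commands. The sketch below illustrates only the filtering and priority ordering; the speech-separation step itself (for example a microphone-array front end) is assumed to have already produced a location parameter for each instruction, and every name here is illustrative rather than taken from the patent.

```python
from typing import Callable, List, Tuple

# One recognised instruction together with the location parameter of its speaker.
Instruction = Tuple[str, float, float]        # (command text, distance, angle)

def execute_in_priority_order(instructions: List[Instruction],
                              run: Callable[[str], None]) -> None:
    """Execute accepted instructions, driver's-seat commands before co-driver ones."""
    def priority(item: Instruction) -> int:
        _, distance, angle = item
        if FIRST_CONDITION.matches(distance, angle):
            return 0      # highest priority: driver's seat
        if SECOND_CONDITION.matches(distance, angle):
            return 1      # lower priority: co-driver's seat
        return 2          # neither condition met: filtered out below

    accepted = [item for item in instructions if priority(item) < 2]
    for command, _, _ in sorted(accepted, key=priority):
        run(command)      # e.g. dispatch to the head unit's command handler
```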
6. A vehicle-mounted device, characterized in that the device comprises:
a receiving module, configured to receive a speech segment input by a user, the speech segment comprising at least one instruction;
an obtaining module, configured to obtain a location parameter of the position where the speaker of the speech segment is located;
a judging module, configured to determine whether the location parameter satisfies a preset condition;
an executing module, configured to execute the instruction when the location parameter satisfies the preset condition;
wherein the preset condition comprises at least a first preset condition and a second preset condition, the first preset condition corresponding to the driver's seat and the second preset condition corresponding to the co-driver's seat.
7. The device according to claim 6, characterized in that the device further comprises:
a setting module, configured to set the first preset condition and the second preset condition according to the location parameter of the driver's seat and the location parameter of the co-driver's seat;
wherein the first preset condition comprises that a distance parameter is greater than or equal to a first distance threshold and less than or equal to a second distance threshold, and an angle parameter is greater than or equal to a first angle threshold and less than or equal to a second angle threshold;
and the second preset condition comprises that the distance parameter is greater than or equal to a third distance threshold and less than or equal to a fourth distance threshold, and the angle parameter is greater than or equal to a third angle threshold and less than or equal to a fourth angle threshold.
8. The device according to claim 7, characterized in that:
the judging module is further configured to determine whether the location parameter satisfies the first preset condition;
the executing module is further configured to execute the instruction when the location parameter satisfies the first preset condition;
the judging module is further configured to determine, when the location parameter does not satisfy the first preset condition, whether the location parameter satisfies the second preset condition;
the executing module is further configured to execute the instruction when the location parameter satisfies the second preset condition;
and the executing module is further configured not to execute the instruction when the location parameter does not satisfy the second preset condition.
9. The device according to claim 6 or 8, characterized in that the device further comprises:
a speech separation module, configured to, when the speech segment comprises speech sub-segments uttered by multiple users, separate the speech segment and trigger the obtaining module to obtain the speech sub-segment corresponding to each user, the speech sub-segments comprising M instructions;
the judging module further comprises a determining submodule, configured to determine whether the location parameter of the position of the speaker of an N-th instruction among the M instructions satisfies the preset condition, and if so, to determine that the M instructions satisfy the preset condition;
wherein M and N are positive integers.
10. The device according to claim 9, characterized in that:
the executing module is further configured to execute the instructions according to their priorities;
wherein the priority is used to indicate the execution order of the instructions, and the priority of an instruction satisfying the first preset condition is higher than the priority of an instruction satisfying the second preset condition.
CN201510217312.2A 2015-04-30 2015-04-30 Speech recognition method and vehicle-mounted device Pending CN104966514A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510217312.2A CN104966514A (en) 2015-04-30 2015-04-30 Speech recognition method and vehicle-mounted device

Publications (1)

Publication Number Publication Date
CN104966514A true CN104966514A (en) 2015-10-07

Family

ID=54220542

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510217312.2A Pending CN104966514A (en) 2015-04-30 2015-04-30 Speech recognition method and vehicle-mounted device

Country Status (1)

Country Link
CN (1) CN104966514A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1815556A (en) * 2005-02-01 2006-08-09 松下电器产业株式会社 Method and system capable of operating and controlling vehicle using voice instruction
EP2028061A2 (en) * 2007-08-23 2009-02-25 Delphi Technologies, Inc. System and method of controlling personalized settings in a vehicle
CN102707262A (en) * 2012-06-20 2012-10-03 太仓博天网络科技有限公司 Sound localization system based on microphone array
CN104572258A (en) * 2013-10-18 2015-04-29 通用汽车环球科技运作有限责任公司 Methods and apparatus for processing multiple audio streams at vehicle onboard computer system

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107219838A * 2016-03-21 2017-09-29 法雷奥照明公司 Control device and method with sound and/or gesture recognition for vehicle interior lighting
CN109658922A * 2017-10-12 2019-04-19 现代自动车株式会社 Device and method for processing user input of a vehicle
CN109658922B (en) * 2017-10-12 2023-10-10 现代自动车株式会社 Apparatus and method for processing user input for vehicle
CN108231075A * 2017-12-29 2018-06-29 北京视觉世界科技有限公司 Control method, apparatus, device and storage medium for cleaning equipment
CN108376058A * 2018-02-09 2018-08-07 斑马网络技术有限公司 Voice control method and device, electronic equipment and storage medium
CN110556113A (en) * 2018-05-15 2019-12-10 上海博泰悦臻网络技术服务有限公司 Vehicle control method based on voiceprint recognition and cloud server
CN109102803A * 2018-08-09 2018-12-28 珠海格力电器股份有限公司 Control method and device, storage medium and electronic device for household appliance
CN111152732A * 2018-11-07 2020-05-15 宝沃汽车(中国)有限公司 Method for adjusting an in-vehicle display screen, in-vehicle display screen rotating assembly, and vehicle
CN111145744A (en) * 2019-12-20 2020-05-12 长兴博泰电子科技股份有限公司 Ad-hoc network-based intelligent household voice control recognition method
CN113744728A (en) * 2021-08-31 2021-12-03 阿波罗智联(北京)科技有限公司 Voice processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN104966514A (en) Speech recognition method and vehicle-mounted device
CN101951553B (en) Navigation method and system based on speech command
CN108735215A (en) Vehicle-mounted voice interaction system, method, device and storage medium
US7437297B2 (en) Systems and methods for predicting consequences of misinterpretation of user commands in automated systems
CN107204185A (en) Vehicle-mounted voice exchange method, system and computer-readable recording medium
CN105551494A (en) Mobile phone interconnection-based vehicle-mounted speech recognition system and recognition method
CN105867179A (en) Vehicle-borne voice control method, device and equipment
CN104850114A (en) Vehicle failure analyzing method and system
CN201830294U (en) Navigation system and navigation server based on voice command
KR20160027728A (en) Apparatus and method for controlling device of vehicle for user customized service
CN103591947B (en) Voice background navigation method for a mobile terminal, and mobile terminal
US11189274B2 (en) Dialog processing system, vehicle having the same, dialog processing method
CN102202082A (en) Vehicle-mounted communication system and method
CN102930868A (en) Identity recognition method and device
CN104661150A (en) Apparatus and method for recognizing voice
CN105609105A (en) Speech recognition system and speech recognition method
CN104751843A (en) Voice service switching method and voice service switching system
CN105825848A (en) Method, device and terminal for voice recognition
CN103680505A (en) Voice recognition method and voice recognition system
CN107444317A (en) Vehicle sunroof control method and system
CN105575402A (en) Network teaching real time voice analysis method
CN105227557A (en) Account processing method and device
KR20140067687A (en) Car system for interactive voice recognition
CN109545203A (en) Audio recognition method, device, equipment and storage medium
CN111833870A (en) Wake-up method and device for vehicle-mounted voice system, vehicle and medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20160120

Address after: 518000 East 606A Science Park Industrial Building, No. 6 Keyuan Road, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen Vcyber Technology Co., Ltd.

Address before: 100000 Beijing, Haidian District, high road, No. 1, No. 2, building 1, floor 102-105

Applicant before: Beijing Vcyber Technology Co., Ltd.

CB02 Change of applicant information

Address after: 518000 East 606A Science Park Industrial Building, No. 6 Keyuan Road, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: Car Sound Intelligent Technology Co., Ltd.

Address before: 518000 East 606A Science Park Industrial Building, No. 6 Keyuan Road, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: Shenzhen Vcyber Technology Co., Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20151007