CN108665893A - Vehicle-mounted audio response system and method - Google Patents
- Publication number
- CN108665893A (application CN201810295194.0A)
- Authority
- CN
- China
- Prior art keywords
- acoustic information
- vehicle
- module
- audio response
- mounted audio
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R16/00—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
- B60R16/02—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/005—Language recognition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/225—Feedback of the input speech
Abstract
The present invention provides a vehicle-mounted audio response system, including: a radio reception component; a processor; and a speech database, wherein the radio reception component further comprises an acquisition module and a memory module, wherein the acquisition module receives an acoustic information from the vehicle, wherein the acoustic information is stored in the memory module, and wherein the processor is communicatively coupled with the memory module, so as to find the corresponding meaning in the speech database that matches the acoustic information, and responds to the acoustic information according to a decision mode. In addition, the present invention further provides a vehicle-mounted audio response method.
Description
Technical field
The present invention relates to a vehicle-mounted voice recognition system, and in particular to a voice system and method that acquires, processes and responds to sound inside a vehicle.
Background art
Multi-information fusion has become a research and development frontier of the automobile and control industries. In order to carry out intelligent control of automobiles reliably, numerous manufacturers and institutions have worked on the problems of driving, since for car driving, safety is paramount. The goal of multi-information fusion is to break through the limitations of human cognition and perceive information that humans cannot — for example, the limits of the driver's field of vision, delayed reactions, stiff movements, and even real-time updates of the traffic system in an emergency. By exploiting the fast execution capability of the automobile itself, and computing the corresponding follow-up control from the automobile's own understanding, the limitations of human driving can be greatly reduced. However, control by the automobile alone places very strict requirements on the control algorithm, particularly for the driving control of fully automatic automobiles. As long as public transport facilities are not universally intelligent, the implementation of intelligent automobiles remains constrained. Moreover, as a form of human control, a spoken demand is generally given priority over other actions. Thus, once such a high-priority control signal is wrong, or conflicts with the automobile's own control, it has a detrimental effect on the control of the entire automobile.
At present, where safety cannot be well guaranteed, vehicle-mounted voice systems are not applied to driving-safety functions, or at least are not the core of driving instructions. Undeniably, however, voice is a hands-free means of communication, and using sound effectively allows better human-vehicle interaction. In particular, with the development of in-vehicle processors, more resources can serve the recognition, processing and feedback of sound, making the processing of voice signals feasible.
Because of the complexity of the driving environment, the application of sound control while driving has also been limited. Even if a voice system performs well in the in-vehicle environment, coupling it directly to travel control can only add risk. In addition, traditional sound control offers no customization; it is typically fixed at the manufacturer's initial setup. Yet depending on driving habits, speaking accent and the in-vehicle environment, the processing of sound needs to change. At present, the result of each sound-control command is merely recorded in a log; if an accident occurs, every such record comes too late. Unlike other voice systems, vehicle-mounted voice consists mostly of command-type language, so a monotonous recognition method cannot meet the needs of in-vehicle recognition, and a sound library built for other circumstances is difficult to fit ideally into this environment. That is, people still cannot grasp well how the automobile reacts to sound in its environment. Only by understanding the vehicle's response to voice can further research and development of control proceed on a solid basis.

All of the above are reasons why current vehicle-mounted voice is unpopular and cannot be applied effectively. Therefore, on the one hand the reliability of vehicle-mounted voice recognition should be improved through application; on the other hand, a sound processing system should be developed for the in-vehicle environment.
Summary of the invention
An object of the present invention is to provide a vehicle-mounted audio response system and method, in which a vehicle receives and processes an acoustic information, performs matching analysis against a speech database, and feeds back the meaning of the acoustic information.

Another object of the present invention is to provide a vehicle-mounted audio response system and method that analyze the acoustic information and give a corresponding evaluation, so that the user obtains the vehicle's reaction to the acoustic information according to a decision mode.

Another object of the present invention is to provide a vehicle-mounted audio response system and method that obtain a corresponding processing result according to the meaning of the acoustic information and the corresponding evaluation.

Another object of the present invention is to provide a vehicle-mounted audio response system and method in which the mode of decision evaluation is set in advance, so that the acoustic information can be converted into information of definite meaning, and a series of reactions such as scoring, translation and answering can then be completed.

Another object of the present invention is to provide a vehicle-mounted audio response system and method that filter, feature-extract and enhance the acoustic information, excluding the influence of the various noises present while driving on the tone and meaning of the acoustic information.

Another object of the present invention is to provide a vehicle-mounted audio response system and method that record the acoustic information for use in subsequent analysis and feedback, giving the system high adaptivity.

Another object of the present invention is to provide a vehicle-mounted audio response system and method that accumulate the acoustic information together with its corresponding meaning, forming pairwise mapping relations and in turn a user library, thereby preserving a personalized voice database.

Another object of the present invention is to provide a vehicle-mounted audio response system and method in which the decision evaluation of the acoustic information can call a regional section of the speech database by using a geographic information, so that the amount of data to be processed is reduced, lowering the processing difficulty and saving processing time.

Another object of the present invention is to provide a vehicle-mounted audio response system and method in which the decision evaluation of the acoustic information serves as the vehicle's reaction to the acoustic information beyond mere matching of literal meaning, so that the vehicle can be understood through its reaction.

Another object of the present invention is to provide a vehicle-mounted audio response system and method in which the decision evaluation of the acoustic information can serve as a basis for language learning and application, so that the user can carry out certain study or entertainment activities in the car.

Another object of the present invention is to provide a vehicle-mounted audio response system and method in which a result or evaluation is in essence given for the acoustic information, so that the vehicle outputs its reaction to the acoustic information.
According to one aspect of the present invention, there is provided a vehicle-mounted audio response system, including:

a radio reception component;

a processor; and

a speech database, wherein the radio reception component further comprises an acquisition module and a memory module, wherein the acquisition module receives an acoustic information from the vehicle, wherein the acoustic information is stored in the memory module, and wherein the processor is communicatively coupled with the memory module, so as to find the corresponding meaning in the speech database that matches the acoustic information, and responds to the acoustic information according to a decision mode.
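As a rough illustration only — the class and attribute names below are hypothetical stand-ins for the claimed components, not anything specified by the patent — the claimed data path (acquisition module stores sound in the memory module; the processor, coupled to the memory module, looks up the matching meaning in the speech database) can be sketched as:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryModule:
    """Buffer inside the radio reception component that holds captured sound."""
    stored: list = field(default_factory=list)

    def store(self, acoustic_info: str) -> None:
        self.stored.append(acoustic_info)

@dataclass
class Processor:
    """Finds the meaning matching each acoustic information in the speech database."""
    speech_database: dict  # hypothetical: acoustic pattern -> meaning

    def respond(self, memory: MemoryModule) -> list:
        # Communicatively coupled with the memory module: read each stored
        # acoustic information and return the matching meaning, if any.
        return [self.speech_database.get(info, "unrecognized")
                for info in memory.stored]

# Minimal usage: capture one utterance, then let the processor react to it.
memory = MemoryModule()
memory.store("open window")
processor = Processor(speech_database={"open window": "command: lower window"})
print(processor.respond(memory))  # -> ['command: lower window']
```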
According to one embodiment of the present invention, the speech database includes a pattern library and a user library, wherein the pattern library stores certain standard language data in advance, and the user library is written with the acoustic information and the corresponding decision result.

According to one embodiment of the present invention, the processor extracts from the acoustic information at least one information type selected from the combination of timbre, tone and content.

According to one embodiment of the present invention, the information extracted from the acoustic information is matched by the processor in the speech database, wherein the matching result is the matching value corresponding to the acoustic information.

According to one embodiment of the present invention, the matching value is evaluated or output according to the corresponding decision.

According to one embodiment of the present invention, the radio reception component further comprises a pre-identification module, wherein the pre-identification module performs a source judgement on the acoustic information.

According to one embodiment of the present invention, if the pre-identification module judges that the acoustic information comes from a human voice, the acoustic information needs to be further matched.

According to one embodiment of the present invention, the processor includes an identification module, a matching module and a decision module, wherein the identification module performs recognition of sound and meaning on the acoustic information acquired by the acquisition module, wherein the matching module generates the matching value for the acoustic information by calling the speech database, and wherein the decision module obtains the decision result according to the matching value of the matching module.

According to one embodiment of the present invention, the processor further includes a locating module, wherein the locating module provides a geographic information for the matching module, whereby the matching range within the speech database in use is narrowed according to the geographic information.

According to one embodiment of the present invention, the processor further includes a setting interface, from which specific execution parameters are loaded into the identification module, the matching module and the decision module.

According to one embodiment of the present invention, the processing parameters of the identification module, the matching module and the decision module are set to default values in advance, wherein the parameters of the matching module and the decision module can be modified through the setting interface.

According to one embodiment of the present invention, the accuracy and speed of processing of the matching module and the decision module can be modified through the setting interface.

According to one embodiment of the present invention, the decision result is presented by display of images.

According to one embodiment of the present invention, the decision is displayed by one or several selected from the combination of a console display, a seat display and a mobile terminal.

According to one embodiment of the present invention, the identification module includes a sound recognition unit and a meaning recognition unit, wherein the sound recognition unit performs tone feature extraction on the acoustic information, and wherein the meaning recognition unit performs semantic feature extraction on the acoustic information.

According to one embodiment of the present invention, when the tone of the acoustic information is associated with the user library, recognition of the acoustic information designates the user library.

According to one embodiment of the present invention, the decision module includes an evaluation unit and an output unit, wherein the evaluation unit evaluates the matching value of the acoustic information against the pattern library and quantifies the result, and wherein the output unit analyzes the meaning of the acoustic information and obtains the language result.
According to another aspect of the present invention, there is provided a vehicle-mounted audio response method, including the steps of:

a. obtaining a decision mode;

b. collecting an acoustic information;

c. processing the acoustic information according to the decision mode; and

d. outputting the processing result.

According to one embodiment of the present invention, step c further includes:

c1. calling a speech database according to the decision mode;

c2. matching the acoustic information with the speech database according to the decision mode; and

c3. mapping the matching result to the output type of the decision mode.

According to one embodiment of the present invention, step b further includes:

b1. performing pre-identification processing on the acoustic information.
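The steps above (a–d, with sub-steps b1 and c1–c3) can be sketched as a single pipeline. Everything here — the stub databases, the decision-mode names, the trivial pre-identification — is an illustrative assumption, not the patent's implementation:

```python
def pre_identify(sound: str) -> str:
    # b1: crude pre-identification stub - strip surrounding "noise" markers.
    return sound.strip("~ ")

def respond(decision_mode: str, sound: str, databases: dict) -> str:
    # a. the decision mode is given; b. the acoustic information is collected.
    sound = pre_identify(sound)                  # b1: pre-identification
    db = databases[decision_mode]                # c1: call a speech database
    match = db.get(sound)                        # c2: match against the database
    # c3: map the matching result to the decision mode's output type,
    # then d. output the processing result.
    if decision_mode == "translate":
        return match if match else "no translation"
    return "matched" if match else "unmatched"

# Hypothetical per-mode databases for the sketch:
databases = {"translate": {"hello": "bonjour"}, "command": {"stop": "brake"}}
print(respond("translate", "~ hello ~", databases))  # -> bonjour
```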
Description of the drawings
Fig. 1 is a scene schematic diagram of the vehicle-mounted audio response system and method according to a preferred embodiment of the present invention.

Fig. 2 is an evaluation schematic diagram of the vehicle-mounted audio response system and method according to the above preferred embodiment of the present invention.

Fig. 3 is a flow chart of the vehicle-mounted audio response method according to the above preferred embodiment of the present invention.

Fig. 4 is an architecture diagram of the vehicle-mounted audio response system according to the above preferred embodiment of the present invention.

Fig. 5 is a schematic diagram of an application of the vehicle-mounted audio response system and method according to the above preferred embodiment of the present invention.

Fig. 6 is a schematic diagram of the information flow of the vehicle-mounted audio response system and method according to the above preferred embodiment of the present invention.
Detailed description of the embodiments
The following description serves to disclose the present invention so that those skilled in the art can realize it. The preferred embodiments described below are only examples; other obvious variations will occur to those skilled in the art. The basic principles of the present invention defined in the following description can be applied to other embodiments, variants, improvements, equivalents and other technical schemes that do not depart from the spirit and scope of the present invention.

Those skilled in the art will understand that, in the disclosure of the present invention, the orientations or positional relationships indicated by terms such as "longitudinal", "transverse", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer" are based on the orientations or positional relationships shown in the drawings, are merely for convenience and simplicity of description, and do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation; the above terms are therefore not to be understood as limiting the invention.
It is understood that the term "a" or "an" is to be interpreted as "at least one" or "one or more"; that is, in one embodiment the quantity of an element may be one, while in a further embodiment the quantity of that element may be plural, and the term "a" cannot be interpreted as a limitation on quantity.
The present invention provides a vehicle-mounted audio response system that includes a radio reception component 10 and a speech database 30, wherein the speech database 30 includes a pattern library 31 and a user library 32, wherein the pattern library stores certain standard language data in advance and the user library is written with the acoustic information and the corresponding decision result. The radio reception component 10 further includes an acquisition module 11 and a memory module 13, wherein the acquisition module 11 receives an acoustic information from the vehicle and stores it in the memory module 13. The processor is communicatively coupled with the memory module. After processing and analyzing the acoustic information, the vehicle-mounted audio response system finds the corresponding language meaning in the speech database 30 that matches the acoustic information, and then reacts to that language meaning. In this way, on-board resources are used to obtain the vehicle's reaction to the acoustic information. It should be noted that the reaction here is a matching result evaluated according to a certain decision.
Fig. 1 schematically shows the application principle by which the vehicle-mounted audio response system performs matching through the radio reception component 10 and the speech database 30. The acoustic information received by the vehicle-mounted audio response system contains the characteristic elements of timbre, tone and content; according to these characteristic elements, the acoustic information corresponds to different meanings.

The processor 20 of the vehicle-mounted audio response system processes and analyzes the received acoustic information and extracts its tone and meaning. The information in the speech database 30 is compared with the acoustic information, and the processor confirms the matching corresponding meaning. Under different decision-evaluation requirements, the vehicle reaction corresponding to the meaning of the acoustic information differs. The tone and meaning corresponding to the acoustic information are stored separately; by storing them in correspondence with the user, a personalized database can be formed.
Preferably, after receiving the acoustic information, the vehicle-mounted audio response system processes it. Once the acoustic information is received, filtering is performed first, to filter out the sound of environmental factors. In this preferred embodiment, the acoustic information can be filtered by hardware filtering or by software filtering. For hardware filtering, the preferred method is to install a damping device or denoising device in the sound collection equipment. For software filtering, the preferred method is to pass only the frequency band of human speech in the acoustic signal. After filtering, feature extraction is performed on the acoustic information: the key elements of the acoustic information are extracted and amplified for accurate comparison and matching later.
After this preliminary processing, the key elements of the acoustic information — its timbre, tone and content — are extracted and then searched and matched in the speech database 30 by the system. The processor 20 extracts from the acoustic information at least one information type selected from the combination of timbre, tone and content. When a matching value is obtained by matching according to the content of the acoustic information, the decision corresponding to the matching value is evaluated or output. Preferably, the timbre of the acoustic information is searched and matched in the speech database 30 by the system in order to distinguish the source of the acoustic information. In this way, misrecognition caused by other sound sources, such as songs or radio broadcasts, is effectively prevented. Of course, the judgement can also be made according to the sound timbres stored in the speech database 30. The evaluation or output given by the vehicle further improves the recognition efficiency for the acoustic information, so that the vehicle-mounted audio response system has strong adaptive learning ability. After repeated feedback, the speech database 30 gains strong adaptability and personalization, giving the vehicle-mounted audio response system a strong learning capability.
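The personalization described here — pairing each stored acoustic information with its confirmed meaning, per user, so later lookups benefit from earlier feedback — is what the description elsewhere calls the user library 32. A minimal sketch, with names and structure assumed rather than taken from the patent:

```python
class UserLibrary:
    """Accumulates acoustic-information -> meaning pairs for one user."""

    def __init__(self, user_id: str):
        self.user_id = user_id
        self.pairs = {}  # acoustic pattern -> confirmed meaning

    def record(self, acoustic_info: str, meaning: str) -> None:
        # Each confirmed decision result is written back, forming the
        # pairwise mapping that makes the database personalized.
        self.pairs[acoustic_info] = meaning

    def lookup(self, acoustic_info: str):
        # A personalized hit; a real system would fall back to the
        # generic pattern library when this returns None.
        return self.pairs.get(acoustic_info)

lib = UserLibrary("driver-1")
lib.record("turn it up", "command: volume up")
print(lib.lookup("turn it up"))  # -> command: volume up
```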
In further feasible applications, the acoustic information is additionally assigned driving-related operations, such as controlling vehicle travel or updating the navigation information of a guided route, or operations on vehicle hardware unrelated to driving, such as playing music, opening a window or unlocking a door. It may of course also be multiple operations. Preferably, the vehicle-mounted audio response system acquires the acoustic information with a voice collection device, such as a microphone.
More preferably, the vehicle-mounted audio response method further includes classification and analysis of the acoustic information. The radio reception component 10 further includes a pre-identification module 12, wherein the pre-identification module 12 performs a source judgement on the acoustic information, that is, recognizes the source from which the acoustic information was emitted. It is worth noting that the pre-identification module 12 obtains the acoustic information from the memory module 13. In particular, if the acoustic information is judged to come from a human voice, and especially if its content is found to be relevant to decision and evaluation, the acoustic information needs to be further matched, that is, handed over to the matching module 22. If it is judged to come from a human voice but its content is found to be irrelevant to travel, the acoustic information is recorded in the speech database 30. If it is judged to come from vehicle sound, and especially if its content is found to be relevant to the vehicle state, a corresponding vehicle-state analysis of the acoustic information is needed, after which it is also stored. In this way, the vehicle state can be recorded in time, which benefits accident analysis and diagnosis. The system can therefore not only ensure safe and reliable vehicle operation, but can also be used to analyze and understand the travel situation and state of the vehicle.
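The three-way routing the pre-identification step implies — voice relevant to decision/evaluation goes to matching, irrelevant voice is merely recorded, vehicle sound goes to state analysis and storage — can be sketched as follows. The labels and the boolean relevance predicate are assumptions for illustration; the patent does not specify how relevance is judged:

```python
def route(source: str, content_relevant: bool) -> str:
    """Decide what happens to one acoustic information after source judgement.

    source: 'voice' or 'vehicle'; content_relevant: whether the content was
    judged relevant (to decision/evaluation for a voice, to the vehicle
    state for vehicle sound).
    """
    if source == "voice":
        # Relevant speech is handed to the matching module; the rest is
        # simply recorded in the speech database.
        return "match" if content_relevant else "record"
    if source == "vehicle":
        # Vehicle sounds relevant to vehicle state get state analysis,
        # and are stored either way for accident analysis and diagnosis.
        return "state-analysis+store" if content_relevant else "store"
    return "discard"

print(route("voice", True))    # -> match
print(route("vehicle", True))  # -> state-analysis+store
```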
The processor 20 includes an identification module 21, a matching module 22 and a decision module 23, wherein the identification module 21 performs recognition of both sound and meaning on the acoustic information acquired by the acquisition module 11, wherein the matching module 22 generates the matching value for the acoustic information by calling the speech database 30, and wherein the decision module 23 evaluates or outputs data according to the matching value of the matching module 22.
The processor 20 further includes a locating module 25, wherein the locating module 25 provides a geographic information for the matching module 22. According to the geographic information, the matching range in the speech database 30 is narrowed.
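Narrowing the matching range with geographic information — for instance, consulting only entries tagged for the current region before matching — is simple to sketch. The region tags and entry layout are hypothetical, not from the patent:

```python
def narrow_by_region(speech_db: list, region: str) -> list:
    """Keep only entries usable in the given region, shrinking the search space."""
    return [entry for entry in speech_db
            if region in entry["regions"] or "any" in entry["regions"]]

# Hypothetical database entries, each tagged with the regions it applies to:
speech_db = [
    {"pattern": "nihao",  "meaning": "hello",  "regions": {"any"}},
    {"pattern": "montag", "meaning": "Monday", "regions": {"DE"}},
    {"pattern": "lundi",  "meaning": "Monday", "regions": {"FR"}},
]
print(len(narrow_by_region(speech_db, "FR")))  # -> 2
```

A smaller candidate set means less data to process per utterance, which is the time and difficulty saving the description attributes to the locating module.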
The processor 20 further includes a setting interface 24, from which specific execution parameters are loaded into the identification module 21, the matching module 22 and the decision module 23. The processing parameters of the identification module 21, the matching module 22 and the decision module 23 are set to default values in advance. Through the setting interface 24, the parameters of the matching module 22 and the decision module 23 can be modified so that the accuracy and speed of processing meet the user's needs.
It should be noted that the decision result and the evaluation are preferably presented by display of images. As shown in Fig. 2, the processing result of the processor 20 is output through a console display, a seat display or a mobile terminal. The decision module 23 transfers the result to the display device and, according to the display requirements of the different devices, adjusts the data format of the result accordingly.
The vehicle-mounted audio response method is shown in Fig. 3 and comprises the steps of:

a. obtaining a decision mode;

b. collecting the acoustic information;

c. processing the acoustic information according to the decision mode; and

d. outputting the processing result.

Further, step c includes:

c1. calling the speech database 30 according to the decision mode;

c2. matching the acoustic information with the speech database 30 according to the decision mode; and

c3. mapping the matching result to the output type of the decision mode.

Further, step b includes:

b1. performing pre-identification processing on the acoustic information.
In step b1, filtering is further performed first, to filter out the sound of environmental factors. The acoustic information can be filtered by hardware filtering or by software filtering. For hardware filtering, the preferred method is to install a damping device or denoising device in the sound collection equipment. For software filtering, the preferred method is to pass only the frequency band of human speech in the acoustic signal.
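The software-filtering idea — keeping only the frequency band in which human speech lies — can be illustrated with a naive DFT band-pass. The 300–3400 Hz band is a common telephony convention assumed here (the patent names no band), and the O(n²) DFT is written out for clarity, not efficiency:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def speech_bandpass(samples, rate, lo=300.0, hi=3400.0):
    """Zero every frequency bin outside the assumed speech band [lo, hi] Hz."""
    n = len(samples)
    X = dft(samples)
    for k in range(n):
        f = k * rate / n
        f = min(f, rate - f)  # fold the negative-frequency half back
        if not lo <= f <= hi:
            X[k] = 0
    return idft(X)

# A 62.5 Hz "engine hum" plus a 1000 Hz "speech" tone, sampled at 8 kHz:
rate, n = 8000, 256
speech = [math.sin(2 * math.pi * 1000 * t / rate) for t in range(n)]
hum = [math.sin(2 * math.pi * 62.5 * t / rate) for t in range(n)]
filtered = speech_bandpass([s + h for s, h in zip(speech, hum)], rate)
# Both tones fall exactly on DFT bins, so the hum is removed almost exactly.
print(max(abs(f - s) for f, s in zip(filtered, speech)) < 1e-6)  # -> True
```

A production system would use an FFT and a proper filter design rather than hard bin truncation, but the principle — discard energy outside the speech band before feature extraction — is the same.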
To illustrate the decision and evaluation procedure of the vehicle, three decision modes are used in this preferred embodiment to explain the process. The vehicle-mounted audio response system and method can operate in applications such as practicing English, learning a dialect, or question-and-answer games, thereby giving the vehicle's reaction to the acoustic information. It should be noted that the reaction to the acoustic information can be an evaluation of the acoustic information, preferably fed back to the user in numerical form.
In an English practice or voice game application, the decision mode for the acoustic information is first determined in this method. For example, the decision is to give a proficiency score for English pronunciation. For the collected acoustic information, pre-identification processing is performed first, filtering out noise. Then, since the task is English pronunciation scoring, the speech database 30 — here a Chinese-and-English pronunciation database — is called accordingly. The tone and the semantics of the acoustic information are extracted separately and matched against the pattern library 31 of the speech database 30. In one matching manner, the standard intonation is found according to the semantics of the acoustic information, and the acoustic information is then matched against it. The matching result, i.e. the matching value, is output according to the decision mode. For example, if the matched result has a matching value of 89%, the decision module 23 gives an evaluation of 89 points. In Fig. 1, the result is output to the center-console display.
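The conversion from matching value to score (89% becoming 89 points in the example above) can be approximated with a plain string-similarity ratio; `difflib.SequenceMatcher` here is a stand-in for whatever acoustic matching against the pattern library the patent intends, not the actual method:

```python
from difflib import SequenceMatcher

def matching_value(spoken: str, reference: str) -> float:
    """Crude proxy for the matching value: similarity of two transcriptions."""
    return SequenceMatcher(None, spoken, reference).ratio()

def decision_score(spoken: str, reference: str) -> int:
    # The decision module turns the 0..1 matching value into a 0-100 evaluation.
    return round(100 * matching_value(spoken, reference))

print(decision_score("good morning", "good morning"))  # -> 100
```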
Further, the identification module 21 includes a sound recognition unit 211 and a meaning recognition unit 212, wherein the sound recognition unit 211 performs tone feature extraction on the acoustic information, and the meaning recognition unit 212 performs semantic feature extraction on the acoustic information. When the tone of the acoustic information is associated with the user library 32, the identification of the acoustic information specifies the user library 32. The meaning recognition unit 212 hands its result to the matching module 22 so that it can be further analyzed and processed. That is, the acoustic information is not understood by passing it through the identification module 21 in its entirety; rather, the meaning of the acoustic information is derived from partial features. In this way, recognition and processing in the vehicle do not each have to recognize the whole acoustic information separately.
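The split between the sound recognition unit 211 and the meaning recognition unit 212 can be sketched as two independent feature extractors whose outputs travel separately. The feature representations below are deliberately trivial placeholders, not the patent's actual extraction method:

```python
class IdentificationModule:
    """Sketch of identification module 21: unit 211 extracts tonal
    features, unit 212 extracts semantic features. Meaning features go
    on to the matching stage; tonal features can select the user library."""

    def sound_unit_211(self, acoustic_info):
        # Tone feature: here, just a crude "pitch class" per token.
        return [len(tok) % 3 for tok in acoustic_info.split()]

    def meaning_unit_212(self, acoustic_info):
        # Semantic feature: here, the normalized token sequence.
        return [tok.lower() for tok in acoustic_info.split()]

    def identify(self, acoustic_info):
        # Partial features suffice; the signal is never "understood" whole.
        return {
            "tone": self.sound_unit_211(acoustic_info),
            "meaning": self.meaning_unit_212(acoustic_info),
        }
```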
The decision-making module 23 includes an evaluation unit 231 and an output unit 232, wherein the evaluation unit 231 evaluates the matching value of the acoustic information according to the pattern library 31 and quantizes the result, and wherein the output unit 232 analyzes the meaning of the acoustic information and obtains a linguistic result according to the matching value. That is to say, the evaluation unit 231 judges according to a given standard, while the output unit 232 obtains a linguistic interpretation according to the matching degree. Preferably, the output unit 232 presents and outputs the result as an image or as speech. The output unit 232 further includes a display output unit 2321 and a voice output unit 2322, wherein the display output unit 2321 converts the matching result into textual visual information, and the voice output unit 2322 converts the matching result into audible voice information.
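The two output paths of the decision-making module can be sketched as below. All class names mirror the reference numerals in the text, but the rendering formats (the score string and the speech-markup stub) are illustrative assumptions only:

```python
class OutputUnit232:
    """Sketch: display output unit 2321 renders the matching result as
    text; voice output unit 2322 renders it as a (stubbed) utterance."""

    def display_2321(self, score):
        return f"Score: {score} / 100"                    # textual visual information

    def voice_2322(self, score):
        return f"<speak>You scored {score} points</speak>"  # audible information stub

class DecisionModule23:
    def __init__(self):
        self.output = OutputUnit232()

    def evaluate_231(self, match_value):
        # Quantize the matching value against the pattern-library standard.
        return round(match_value * 100)

    def decide(self, match_value):
        score = self.evaluate_231(match_value)
        return self.output.display_2321(score), self.output.voice_2322(score)
```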
A feasible application scenario is shown in Fig. 5 and Fig. 6. The memory module 13 of the radio reception component 10 records the acoustic information; the processor 20 performs matching according to the speech database 30 to obtain a meaning interpretation, and then translates the acoustic information into another language or dialect. If the pre-identification module 12 has processed the acoustic information, subsequent operations are performed on the processed acoustic information. The data flow by which the vehicle, through the vehicle-mounted audio response system and method, reacts to the acoustic information is shown in Fig. 6.
As shown in Fig. 5, after the acoustic information is received and pre-identified, the identification module 21 extracts the tone and meaning of the acoustic information. Preferably, the tone of the acoustic information assists the matching module 22 in finding the corresponding data portion in the speech database 30. That is, by identifying the tone, the relevant part of the pattern library 31 in the speech database 30 can be located, so that it is not necessary to match against the entire database. Alternatively, the user library 32 is addressed according to the timbre of the acoustic information. That is, the user library 32 does not need to be explicitly specified; rather, the user library 32 is selected by the timbre of the acoustic information. In addition, the locating module 25 provides the geographic information so that the matching module 22 further locks onto the relevant part of the speech database 30.
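The three ways of narrowing the search space just described (tone locating a pattern-library part, timbre selecting a user library, geographic information restricting further) can be sketched as simple lookups. The dictionary keying scheme and field names are assumptions for illustration:

```python
def narrow_pattern_library(pattern_library, tone, region=None):
    """Tone locates the relevant part of pattern library 31; the
    locating module's geographic information narrows it further."""
    part = pattern_library.get(tone, [])
    if region is not None:
        part = [entry for entry in part if entry.get("region") == region]
    return part

def select_user_library(user_libraries, timbre):
    """The user library 32 is chosen by timbre rather than being
    specified explicitly; unknown timbres fall back to a default."""
    return user_libraries.get(timbre, user_libraries.get("default", {}))
```

With this shape, the matching module only ever compares against the returned subset instead of the whole database.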
The matching module 22 compares the acoustic information against the speech database 30 to find the corresponding meaning and thereby obtain a matching value. In this preferred embodiment, the matching module 22 uses the meaning of the acoustic information to obtain a translation of the acoustic information into a specified language. The decision-making module 23 outputs the result obtained by the matching module 22. Preferably, the display output unit 2321 of the output unit 232 outputs written language, achieving the translation effect shown in Fig. 5 as the vehicle's reaction to the acoustic information.
Fig. 6 is an overall flow diagram of the processing of the acoustic information in the radio reception component 10, the processor 20, and the speech database 30. First, the speech database 30 is obtained, and the acoustic information is adaptively matched through the pattern library 31 and the user library 32 of the speech database 30. Then the acoustic information is received; in particular, reception of the acoustic information is performed in real time, and the method preferably runs periodically. Pre-identification processing is applied to the acoustic information; this preferred embodiment uses a combination of hardware and software filtering, preferably extracting the content, timbre, and tone of the acoustic information for later analysis and judgment. Then a corresponding output result is obtained for the acoustic information according to the decision and evaluation mode. The output unit 232 provides picture or sound output. For example, the acoustic information is given a score evaluation, a corresponding text display, or a corresponding translated text.
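The overall periodic flow of Fig. 6 (receive, pre-identify, match, decide, output) can be sketched as the loop below. The stages are passed in as plain callables standing in for the modules described in the text; the function signature and period handling are illustrative assumptions:

```python
import time

def run_audio_response_cycle(receive, pre_identify, match, decide, output,
                             periods=3, interval=0.0):
    """Sketch of the overall flow as a periodic loop: receive ->
    pre-identification (filtering, feature extraction) -> matching ->
    decision -> picture/sound output."""
    results = []
    for _ in range(periods):
        raw = receive()
        if raw is None:                # nothing captured this period
            time.sleep(interval)
            continue
        features = pre_identify(raw)   # content, timbre, tone
        value = match(features)        # matching value from the database
        results.append(output(decide(value)))
        time.sleep(interval)
    return results

# Example run with trivial stand-in stages.
samples = iter(["hello", None, "world"])
res = run_audio_response_cycle(
    receive=lambda: next(samples),
    pre_identify=lambda s: s.strip().lower(),
    match=lambda f: len(f) / 10,
    decide=lambda v: round(v * 100),
    output=lambda d: f"score={d}",
)
```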
It should be noted that there are also many cases in which the acoustic information is recorded in the speech database 30 of the vehicle-mounted audio response method. In particular, when the acoustic information is not matched to a corresponding meaning, the acoustic information is preferably given an approximate reaction according to the matching value. When the acoustic information is not matched to a corresponding meaning, it is regarded as having no corresponding meaning and is recorded in the speech database 30, but the decision-making module 23 can still give a result. When the source of the acoustic information is identified but no corresponding entry in the speech database 30 is matched, the acoustic information and the analysis result are recorded in the speech database 30. When it is confirmed again that the meaning corresponding to the acoustic information is not identified, the acoustic information is recorded in the speech database 30 of the vehicle-mounted audio response method. While ensuring that the acoustic information reliably triggers the corresponding action, the analysis process and result of non-triggering acoustic information are fed back and used to update the method. This improves the stability of the vehicle-mounted audio response method while giving the method backup-and-update and adaptive-learning capability, so that in use the intelligence and execution capability of the method are continuously optimized.
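The fallback-and-learning path described above can be sketched as follows: unmatched acoustic information still receives a result, is logged with its analysis, and a later feedback pass folds the log back into the database. The class layout and the string returned for a non-match are assumptions, not the patent's implementation:

```python
class SpeechDatabase30:
    """Sketch of the fallback path: unmatched acoustic information is
    recorded together with its analysis result so later updates can
    learn from it, while the decision module still returns a result."""

    def __init__(self, known):
        self.known = dict(known)       # meaning -> matching data
        self.unmatched_log = []        # recorded non-triggering inputs

    def react(self, acoustic_info, analysis):
        if acoustic_info in self.known:
            return self.known[acoustic_info]            # reliable trigger
        self.unmatched_log.append((acoustic_info, analysis))
        return "no-match: approximate reaction"         # still gives a result

    def feedback_update(self):
        # Adaptive learning: fold logged analyses back into the database.
        for info, analysis in self.unmatched_log:
            self.known.setdefault(info, analysis)
        self.unmatched_log.clear()
```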
It should be understood by those skilled in the art that the embodiments of the present invention shown in the foregoing description and the accompanying drawings are merely illustrative and are not intended to limit the present invention. The objectives of the present invention have been fully and effectively achieved. The functions and structural principles of the present invention have been shown and explained in the embodiments; without departing from these principles, the embodiments of the present invention may be subject to any deformation or modification.
Claims (20)
1. A vehicle-mounted audio response system, characterized by comprising:
a radio reception component;
a processor; and
a speech database, wherein the radio reception component further comprises an acquisition module and a memory module, wherein the acquisition module receives an acoustic information from a vehicle, wherein the acoustic information is stored in the memory module, and wherein the processor is communicatively coupled with the memory module so as to find in the speech database a corresponding meaning matching the acoustic information and to react to the acoustic information according to a decision mode.
2. The vehicle-mounted audio response system according to claim 1, wherein the speech database comprises a pattern library and a user library, wherein the pattern library stores certain standard language data in advance, and wherein the acoustic information and the corresponding decision result are written into the user library.
3. The vehicle-mounted audio response system according to claim 2, wherein the processor extracts from the acoustic information at least one information type selected from the group consisting of timbre, tone, and content.
4. The vehicle-mounted audio response system according to claim 2, wherein the information extracted from the acoustic information is matched by the processor in the speech database, and wherein the matching result is the matching value corresponding to the acoustic information.
5. The vehicle-mounted audio response system according to claim 2, wherein the matching value is evaluated or output according to the corresponding decision mode.
6. The vehicle-mounted audio response system according to claim 4, wherein the radio reception component further comprises a pre-identification module, wherein the pre-identification module performs source judgment on the acoustic information.
7. The vehicle-mounted audio response system according to claim 6, wherein, if the pre-identification module judges that the acoustic information comes from a human voice, the acoustic information needs to be further matched.
8. The vehicle-mounted audio response system according to claim 4, wherein the processor comprises an identification module, a matching module, and a decision-making module, wherein the identification module identifies the sound and meaning of the acoustic information acquired by the acquisition module, wherein the matching module generates the matching value for the acoustic information by calling the speech database, and wherein the decision-making module obtains a decision result according to the matching value of the matching module.
9. The vehicle-mounted audio response system according to claim 8, wherein the processor further comprises a locating module, wherein the locating module provides a geographic information for the matching module, and wherein the matching range in the speech database is narrowed according to the geographic information.
10. The vehicle-mounted audio response system according to claim 8, wherein the processor further comprises a setting interface, wherein specific execution parameters are loaded from the setting interface into the identification module, the matching module, and the decision-making module.
11. The vehicle-mounted audio response system according to claim 8, wherein the processing parameters of the identification module, the matching module, and the decision-making module are set to default values in advance, and wherein the parameters of the matching module and the decision-making module are modified through the setting interface.
12. The vehicle-mounted audio response system according to claim 8, wherein the accuracy and speed of the processing of the matching module and the decision-making module are modified through the setting interface.
13. The vehicle-mounted audio response system according to claim 8, wherein the decision result is presented by way of image display.
14. The vehicle-mounted audio response system according to claim 13, wherein the decision is displayed by one or more selected from the group consisting of a console display, a seat display, and a mobile terminal.
15. The vehicle-mounted audio response system according to claim 8, wherein the identification module comprises a sound recognition unit and a meaning recognition unit, wherein the sound recognition unit performs tone feature extraction on the acoustic information, and wherein the meaning recognition unit performs semantic feature extraction on the acoustic information.
16. The vehicle-mounted audio response system according to claim 15, wherein, when the tone of the acoustic information is associated with the user library, the user library is specified by the identification of the acoustic information.
17. The vehicle-mounted audio response system according to claim 8, wherein the decision-making module comprises an evaluation unit and an output unit, wherein the evaluation unit evaluates the matching value of the acoustic information according to the pattern library and quantizes the result, and wherein the output unit analyzes the meaning of the acoustic information and obtains a linguistic result.
18. A vehicle-mounted audio response method, characterized by comprising the steps of:
a. obtaining a decision mode;
b. receiving an acoustic information;
c. processing the acoustic information according to the decision mode; and
d. outputting a processing result.
19. The vehicle-mounted audio response method according to claim 18, wherein step c further comprises:
c1. calling a speech database according to the decision mode;
c2. matching the acoustic information with the speech database according to the decision mode; and
c3. outputting the matching result in the output type corresponding to the decision mode.
20. The vehicle-mounted audio response method according to claim 18, wherein step b further comprises: b1. performing pre-identification processing on the acoustic information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810295194.0A CN108665893A (en) | 2018-03-30 | 2018-03-30 | Vehicle-mounted audio response system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108665893A true CN108665893A (en) | 2018-10-16 |
Family
ID=63783075
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810295194.0A Pending CN108665893A (en) | 2018-03-30 | 2018-03-30 | Vehicle-mounted audio response system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108665893A (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005003747A (en) * | 2003-06-09 | 2005-01-06 | Cai Media Kyodo Kaihatsu:Kk | Interactive robot and dialogue system |
CN103187051A (en) * | 2011-12-28 | 2013-07-03 | 上海博泰悦臻电子设备制造有限公司 | Vehicle-mounted interaction device |
CN103714727A (en) * | 2012-10-06 | 2014-04-09 | 南京大五教育科技有限公司 | Man-machine interaction-based foreign language learning system and method thereof |
JP2015089697A (en) * | 2013-11-05 | 2015-05-11 | トヨタ自動車株式会社 | Vehicular voice recognition apparatus |
US20150170653A1 (en) * | 2013-12-18 | 2015-06-18 | Harman International Industries, Incorporated | Voice recognition query response system |
CN105551328A (en) * | 2016-01-28 | 2016-05-04 | 北京聚力互信教育科技有限公司 | Language teaching coaching and study synchronization integration system on the basis of mobile interaction and big data analysis |
US20160240189A1 (en) * | 2015-02-16 | 2016-08-18 | Hyundai Motor Company | Vehicle and method of controlling the same |
CN106057194A (en) * | 2016-06-25 | 2016-10-26 | 浙江合众新能源汽车有限公司 | Voice interaction system |
CN106662918A (en) * | 2014-07-04 | 2017-05-10 | 歌乐株式会社 | In-vehicle interactive system and in-vehicle information appliance |
CN106828372A (en) * | 2017-01-22 | 2017-06-13 | 斑马信息科技有限公司 | Vehicle-mounted voice control system and method |
CN107221318A (en) * | 2017-05-12 | 2017-09-29 | 广东外语外贸大学 | Oral English Practice pronunciation methods of marking and system |
CN107316643A (en) * | 2017-07-04 | 2017-11-03 | 科大讯飞股份有限公司 | Voice interactive method and device |
CN107329996A (en) * | 2017-06-08 | 2017-11-07 | 三峡大学 | A kind of chat robots system and chat method based on fuzzy neural network |
CN107424611A (en) * | 2017-07-07 | 2017-12-01 | 歌尔科技有限公司 | Voice interactive method and device |
CN206907249U (en) * | 2017-05-22 | 2018-01-19 | 湖南中科优信科技有限公司 | Dialect learning machine based on speech recognition |
- 2018-03-30: Application CN201810295194.0A filed in China; published as CN108665893A, status Pending
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110070870B (en) * | 2019-05-06 | 2022-02-08 | 阿波罗智联(北京)科技有限公司 | Signal processing method and system of vehicle-mounted system |
CN114379468A (en) * | 2021-12-28 | 2022-04-22 | 东风柳州汽车有限公司 | Vehicle sound adaptive adjustment method, device, equipment and storage medium |
CN114379468B (en) * | 2021-12-28 | 2023-06-06 | 东风柳州汽车有限公司 | Vehicle sound self-adaptive adjusting method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108806667B (en) | Synchronous recognition method of voice and emotion based on neural network | |
US9881616B2 (en) | Method and systems having improved speech recognition | |
DE19533541C1 (en) | Method for the automatic control of one or more devices by voice commands or by voice dialog in real time and device for executing the method | |
CN102097096B (en) | Using pitch during speech recognition post-processing to improve recognition accuracy | |
DE102012217160B4 (en) | Procedures for correcting unintelligible synthetic speech | |
US9564120B2 (en) | Speech adaptation in speech synthesis | |
CN102324035A (en) | Method and system of applying lip posture assisted speech recognition technique to vehicle navigation | |
CN102693725A (en) | Speech recognition dependent on text message content | |
CN108242236A (en) | Dialog process device and its vehicle and dialog process method | |
DE102010034433B4 (en) | Method of recognizing speech | |
CN109941231B (en) | Vehicle-mounted terminal equipment, vehicle-mounted interaction system and interaction method | |
DE102019107624A1 (en) | System and method for fulfilling a voice request | |
CN110047502A (en) | The recognition methods of hierarchical voice de-noising and system under noise circumstance | |
US20040199389A1 (en) | Method and device for recognising a phonetic sound sequence or character sequence | |
US11676572B2 (en) | Instantaneous learning in text-to-speech during dialog | |
CN101226742A (en) | Method for recognizing sound-groove based on affection compensation | |
CN110027409B (en) | Vehicle control device, vehicle control method, and computer-readable recording medium | |
Maheswari et al. | A hybrid model of neural network approach for speaker independent word recognition | |
CN110232924A (en) | Vehicle-mounted voice management method, device, vehicle and storage medium | |
CN108665893A (en) | Vehicle-mounted audio response system and method | |
Robert-Ribes et al. | Exploiting sensor fusion architectures and stimuli complementarity in AV speech recognition | |
CN107818783A (en) | A kind of mutual method and device of man-machine multi-modal on-vehicle safety sexual intercourse based on vocal print technology | |
CN202329640U (en) | System for applying auxiliary voice recognition technology by mouth shape in vehicular navigation | |
DE112021000292T5 (en) | VOICE PROCESSING SYSTEM | |
CN106828372A (en) | Vehicle-mounted voice control system and method |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181016 |