CN114637487A - Vehicle voice interaction searching method and device, vehicle and storage medium - Google Patents


Info

Publication number
CN114637487A
CN114637487A (application CN202210238276.8A)
Authority
CN
China
Prior art keywords
vehicle
voice
user
instruction
sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210238276.8A
Other languages
Chinese (zh)
Inventor
樊倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chery Automobile Co Ltd
Wuhu Lion Automotive Technologies Co Ltd
Original Assignee
Chery Automobile Co Ltd
Wuhu Lion Automotive Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chery Automobile Co Ltd, Wuhu Lion Automotive Technologies Co Ltd filed Critical Chery Automobile Co Ltd
Priority to CN202210238276.8A priority Critical patent/CN114637487A/en
Publication of CN114637487A publication Critical patent/CN114637487A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3605 Destination input or retrieval
    • G01C21/3608 Destination input or retrieval using speech input, e.g. using speech recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3322 Query formulation using system suggestions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3343 Query execution using phonetics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9532 Query formulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9537 Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/18 Status alarms
    • G08B21/24 Reminder alarms, e.g. anti-loss alarms
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/22 Interactive procedures; Man-machine interfaces
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Artificial Intelligence (AREA)
  • Automation & Control Theory (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • User Interface Of Digital Computer (AREA)
  • Navigation (AREA)

Abstract

The application discloses a vehicle voice interaction search method and device, a vehicle, and a storage medium. The method comprises the following steps: detecting whether the vehicle enters a voice search condition; after detecting that the vehicle has entered the voice search condition, controlling a sound reception device of the vehicle to start receiving sound, recognizing at least one word feature in the received instruction, and simultaneously displaying association information corresponding to the at least one word feature; and acquiring a voice instruction determined from the association information, and controlling the vehicle to execute an interactive action according to the voice instruction. This addresses the problems in the related art that user intent is difficult to recognize quickly and accurately and that a complete instruction can only be obtained through multiple user inputs, which distracts the driver, distorts driving operations, can even lead to traffic accidents, reduces driving safety, and makes the interaction mode insufficiently intelligent to meet users' needs.

Description

Vehicle voice interaction searching method and device, vehicle and storage medium
Technical Field
The present application relates to the field of vehicle interaction technologies, and in particular to a vehicle voice interaction search method and apparatus, a vehicle, and a storage medium.
Background
To improve the intelligence level of vehicles, convenient interaction modes such as voice and gestures are commonly provided in vehicle use scenarios; compared with touch-screen operation, voice operation is more convenient and safer. However, new interaction modes such as voice and gestures require a certain learning cost and learning time before a user can operate them conveniently and quickly.
When a user is still exploring and learning the voice interaction mode, it is difficult in complex voice scenarios for voice interaction to recognize the user's intent from simple voice instructions; the user must organize language and produce lengthy voice statements. Combining voice with a screen can mitigate this recognition difficulty in the early stage of voice interaction.
However, in the related art, the combination of voice and screen usually means that the screen displays the voice input result only after voice input has finished, and the user must supplement the input again before the intent is fully recognized. This distracts the user while driving, distorts driving operations, can even cause traffic accidents, reduces vehicle driving safety, and makes the interaction mode insufficiently intelligent to meet users' needs, so improvement is urgently required.
Summary
The application provides a vehicle voice interaction search method and device, a vehicle, and a storage medium, aiming to solve the problems in the related art that user intent is difficult to recognize quickly and accurately, that a complete instruction can only be acquired through multiple user inputs, and that the user is prone to distraction while using voice interaction, which can lead to traffic accidents.
An embodiment of the first aspect of the present application provides a vehicle voice interaction search method comprising the following steps: detecting whether the vehicle enters a voice search condition; after detecting that the vehicle has entered the voice search condition, controlling a sound reception device of the vehicle to start receiving sound, recognizing at least one word feature in the received instruction, and simultaneously displaying association information corresponding to the at least one word feature; and acquiring a voice instruction determined from the association information, and controlling the vehicle to execute an interactive action according to the voice instruction.
Optionally, in an embodiment of the present application, detecting whether the vehicle enters the voice search condition includes: receiving a voice wake-up instruction from the user; or detecting that a voice wake-up key of the vehicle has been triggered.
Optionally, in an embodiment of the present application, displaying the association information corresponding to the at least one word feature includes: inputting the at least one word feature into a pre-trained association model to obtain the association information.
Optionally, in an embodiment of the present application, after controlling the sound reception device of the vehicle to start receiving sound, the method further includes: controlling at least one acoustic reminder device and/or at least one optical reminder device of the vehicle to indicate the sound reception state.
Optionally, in an embodiment of the present application, displaying the association information corresponding to the at least one word feature includes: determining the weight of each word segment in the association information; and arranging and displaying the association information from top to bottom according to the weight of each word segment.
An embodiment of the second aspect of the present application provides a vehicle voice interaction search apparatus, comprising: a detection module configured to detect whether the vehicle enters a voice search condition; an association module configured to, after detecting that the voice search condition has been entered, control a sound reception device of the vehicle to start receiving sound, recognize at least one word feature in the received instruction, and display association information corresponding to the at least one word feature; and an interaction module configured to acquire a voice instruction determined from the association information and control the vehicle to execute an interactive action according to the voice instruction.
Optionally, in an embodiment of the present application, the detection module includes: a receiving unit configured to receive a voice wake-up instruction from the user; and a triggering unit configured to respond to triggering of the vehicle's voice wake-up key.
Optionally, in an embodiment of the present application, the association module is further configured to input the at least one word feature into a pre-trained association model to obtain the association information.
Optionally, in an embodiment of the present application, the association module further includes: a reminder unit configured to control at least one acoustic reminder device and/or at least one optical reminder device of the vehicle to indicate the sound reception state.
Optionally, in an embodiment of the present application, the association module further includes: a weight determination unit configured to determine the weight of each word segment in the association information; and a sorting unit configured to arrange and display the association information from top to bottom according to the weight of each word segment.
An embodiment of the third aspect of the present application provides a vehicle comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor executes the program to implement the vehicle voice interaction search method of the above embodiments.
An embodiment of the fourth aspect of the present application provides a computer-readable storage medium storing computer instructions that cause a computer to execute the vehicle voice interaction search method of the above embodiments.
According to the embodiments of the application, after entry into the voice search condition is detected, word features can be extracted from the received user voice instruction and used for association, so that the user's intent is determined and the corresponding interaction is carried out. This saves learning time and cost in the early stage of voice interaction: the user does not need to organize language into long sentence descriptions, because the intent can be acquired accurately from short phrases. It improves the driving experience and prevents the user from being distracted during voice interaction, which could otherwise cause traffic accidents. The embodiments thereby solve the problems in the related art that user intent is difficult to recognize quickly and accurately and that a complete instruction can only be acquired through multiple user inputs, which distorts driving operations, can even cause traffic accidents, reduces vehicle driving safety, and makes the interaction mode insufficiently intelligent to meet users' needs.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flowchart of a vehicle voice interaction search method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of associative feedback in a vehicle voice interaction search method according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating wake-up modes of a vehicle voice interaction search method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of sound reception status feedback in a vehicle voice interaction search method according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating an implementation result of a vehicle voice interaction search method according to an embodiment of the present application;
FIG. 6 is a flowchart of a vehicle voice interaction search method according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a vehicle voice interaction search apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a vehicle according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The following describes the vehicle voice interaction search method and apparatus, vehicle, and storage medium of the embodiments of the present application with reference to the drawings. To address the problems mentioned in the Background (in the related art, user intent is difficult to recognize quickly and accurately, a complete instruction can only be obtained through multiple user inputs, driving operations are consequently distorted and traffic accidents may even occur, vehicle driving safety is reduced, and the interaction mode is not intelligent enough to meet users' needs), the present application provides a vehicle voice interaction search method. In this method, after detecting that the vehicle has entered a voice search condition, word features are extracted from the received user voice instruction and used for association in order to determine the user's intent and carry out the corresponding interactive action. This saves learning time and cost in the early stage of voice interaction and allows the user's intent to be acquired accurately from short phrases without organizing long sentence descriptions, which improves the driving experience and prevents the user from being distracted during voice interaction and causing traffic accidents.
Specifically, fig. 1 is a schematic flowchart of a vehicle voice interaction search method according to an embodiment of the present application.
As shown in fig. 1, the vehicle voice interactive search method includes the following steps:
In step S101, it is detected whether the vehicle enters a voice search condition.
In actual execution, the embodiment of the application detects whether the vehicle has entered the voice search condition; when it is detected that the vehicle has not entered the voice search condition, the embodiment does not respond, so as to avoid treating the user's normal conversation as instructions and causing misjudgment, which improves the accuracy and practicality of the interaction and meets usage requirements.
Optionally, in an embodiment of the present application, detecting whether the vehicle enters the voice search condition includes: receiving a voice wake-up instruction from the user; or detecting that a voice wake-up key of the vehicle has been triggered.
Specifically, as one possible implementation, the voice wake-up instruction may be a specific keyword: after the user speaks the keyword, the embodiment of the application enters the voice search condition. The keyword may be a factory-set word or may be configured according to the user's preference.
As another possible implementation, the voice wake-up key of the vehicle may be a touch-screen switch on the vehicle's touch screen or a physical switch on the in-vehicle center console.
In addition, the vehicle voice interaction function can also be opened and closed through a mobile terminal associated with the vehicle. For example, the driver sends an opening or closing signal to the vehicle terminal through a mobile phone, thereby controlling the vehicle to enter or exit the voice search condition. This makes the vehicle's interaction functions more intelligent and convenient, improves the user experience, and helps retain customers.
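The entry paths just described (spoken wake word, wake key, remote signal from the mobile terminal) can be combined into a single check. The following is an illustrative Python sketch, not code from the patent; the wake word, parameter names, and the "open" signal value are all hypothetical:

```python
from typing import Optional

DEFAULT_WAKE_WORD = "hello xiaoqi"  # hypothetical factory-set keyword; user-configurable

def enters_voice_search(utterance: Optional[str] = None,
                        wake_key_pressed: bool = False,
                        remote_signal: Optional[str] = None,
                        wake_word: str = DEFAULT_WAKE_WORD) -> bool:
    """Return True when the vehicle should enter the voice search condition.

    Any one trigger suffices: the spoken wake word, the touch-screen or
    physical wake key, or an "open" signal from the associated mobile terminal.
    """
    if wake_key_pressed:
        return True
    if remote_signal == "open":
        return True
    if utterance is not None and wake_word in utterance.lower():
        return True
    return False
```

A matching "close" signal from the mobile terminal would exit the condition again; that branch is omitted for brevity.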
In step S102, after it is detected that the vehicle has entered the voice search condition, the sound reception device of the vehicle is controlled to start receiving sound, at least one word feature in the received instruction is recognized, and association information corresponding to the at least one word feature is displayed at the same time.
It can be understood that, after the embodiment of the application detects that the vehicle has entered the voice search condition, a sound reception device of the vehicle, such as a microphone, can be controlled to receive sound from the user. At least one word feature is then recognized from the received instruction, and the association information corresponding to the recognized word feature is displayed. The received instruction may be a word or a short phrase spoken by the user and may be delimited by reception duration or reception frequency, which is not specifically limited here.
Optionally, in an embodiment of the present application, after controlling the sound reception device of the vehicle to start receiving sound, the method further includes: controlling at least one acoustic reminder device and/or at least one optical reminder device of the vehicle to indicate the sound reception state.
As one possible implementation, after the sound reception device of the vehicle is controlled to start receiving sound, the embodiment of the application can control at least one acoustic reminder device and/or at least one optical reminder device of the vehicle to indicate the sound reception state. This reminds the user more intuitively that sound is currently being received, and also alerts the user when the voice search mode has been turned on accidentally, for example by a mistaken touch, thereby avoiding interference with the user's normal driving.
In addition, the acoustic reminder device can play back the user's input after voice input is finished, and the optical reminder device can display the user's input as text on the display screen after voice input is finished, making it convenient for the user to confirm whether the current received instruction is accurate; if it is wrong, voice input can be repeated in time.
It should be noted that the number and form of the acoustic and optical reminder devices may be set by those skilled in the art according to the vehicle model, and are not limited here.
Optionally, in an embodiment of the present application, displaying the association information corresponding to at least one word feature includes: inputting the at least one word feature into a pre-trained association model to obtain the association information.
Further, the embodiment of the application may train an association model in advance and associate the at least one recognized word feature through this pre-trained model, thereby obtaining the association information.
For example, as shown in fig. 2, taking navigation as an example, when the user speaks the instruction "go to Chang" under the voice search condition, the embodiment of the application receives the instruction, extracts the word features "go" and "Chang", associates on those features, and offers association information such as "go to Changchun Road No. 8" and "go to Chang'an Men".
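A minimal sketch of such an associative lookup, shown here with plain prefix matching over a candidate pool rather than a trained association model; the destinations echo the fig. 2 navigation example (rendered in English), and every name below is hypothetical:

```python
# Hypothetical candidate pool an association model might draw from.
CANDIDATES = [
    "go to Changchun Road No. 8",
    "go to Chang'an Men",
    "go to Changting Street",
    "go home",
]

def associate(partial: str, pool=CANDIDATES) -> list:
    """Return candidate instructions that extend the partial utterance."""
    prefix = partial.strip().lower()
    return [c for c in pool if c.lower().startswith(prefix)]
```

A production system would replace the prefix match with the pre-trained association model described above.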
Optionally, in an embodiment of the present application, displaying the association information corresponding to at least one word feature includes: determining the weight of each word segment in the association information; and arranging and displaying the association information from top to bottom according to the weight of each word segment.
In some embodiments, several pieces of association information may correspond to the same word feature. The embodiment of the application can determine the weight of each word segment in the association information and display the association information in order of weight. The weight of each word segment may be determined from the segment's search frequency in big data, possibly combined with the user's own search frequency. Sorting the association information by these weights lets the user accurately find the intended instruction in the ranking, saves operation time, and provides a more convenient voice search mode.
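The weighting scheme just described (big-data search frequency, optionally combined with the user's own search frequency) could look like the following sketch; the bias constant and all names are hypothetical:

```python
def rank_suggestions(suggestions, global_freq, user_freq, user_bias=2.0):
    """Order association results top-to-bottom by combined weight.

    weight = global search frequency + user_bias * this user's own frequency,
    so personally frequent destinations float above generally popular ones.
    """
    def weight(s):
        return global_freq.get(s, 0) + user_bias * user_freq.get(s, 0)
    return sorted(suggestions, key=weight, reverse=True)
```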
In step S103, a voice instruction determined from the association information is acquired, and the vehicle is controlled to perform an interactive action according to the voice instruction.
In actual execution, the user can select the voice instruction matching the intent from the association information, either by touch screen or by voice input. For example, a voice input of "second" selects the second piece of association information as the intended instruction. The embodiment of the application can thereby acquire an accurate voice instruction and issue it to the functional devices in the vehicle, achieving the user's voice interaction.
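Resolving a short spoken reply such as "second" to an item in the displayed list might look like the following sketch (the ordinal table and all names are hypothetical):

```python
from typing import Optional

ORDINALS = {"first": 0, "second": 1, "third": 2}  # hypothetical spoken-ordinal table

def select_by_voice(reply: str, suggestions: list) -> Optional[str]:
    """Map a spoken ordinal reply to the corresponding suggestion, or None."""
    word = reply.strip().strip(".").lower()
    index = ORDINALS.get(word)
    if index is None or index >= len(suggestions):
        return None
    return suggestions[index]
```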
The embodiments of the present application will be described in detail with reference to fig. 2 to 6.
As shown in fig. 6, taking navigation as an example, the embodiment of the present application includes the following steps:
Step S601: enter the voice search condition. In actual execution, the embodiment of the application detects whether the vehicle has entered the voice search condition; when it has not, the subsequent voice search steps are not performed, in order to avoid responding to the user's normal conversation and causing misjudgment.
Specifically, the voice wake-up instruction may be a specific keyword, for example "XX, hello". After the user speaks the keyword, the embodiment of the application enters the voice search condition. The keyword may be a factory-set word or may be configured according to the user's preference.
As shown in fig. 3, the voice wake-up key of the vehicle may be a touch-screen switch on the vehicle's touch screen or a physical switch on the in-vehicle center console; after the user triggers the key, the embodiment of the application enters the voice search condition.
In addition, the vehicle voice interaction function can also be opened and closed through the user's mobile terminal associated with the vehicle.
Step S602: and starting to receive the sound. It can be understood that, after the embodiment of the application detects that the vehicle enters the voice search condition, a sound receiving device of the vehicle, such as a microphone, can be controlled to receive sound for the user.
Step S603: and feeding back a radio receiving instruction and associating the instruction content. As shown in fig. 4, after the sound reception device of the vehicle is controlled to start receiving sound, the embodiment of the application can control at least one acoustic reminding device and/or at least one optical reminding device of the vehicle to remind the user of the sound reception state, so that the user can be more intuitively reminded that the user is currently in the sound reception state, and meanwhile, the user can be reminded by turning on the voice search mode in a way of mistaken touch and the like, so that the influence on the normal driving state of the user is avoided.
In addition, the acoustic reminder device can play back the user's input after voice input is finished, and the optical reminder device can display the user's input as text on the display screen after voice input is finished, making it convenient for the user to confirm whether the received instruction is accurate.
In the embodiment of the present application, at least one word feature may be recognized from the obtained sound reception instruction, and the association information corresponding to the recognized word feature may be displayed. The sound reception instruction obtained in the embodiment of the present application may be a short sentence spoken by the user.
In some embodiments, several pieces of association information may correspond to the same word feature. The embodiment of the present application may determine the weight of each word segment in the association information and display the association information arranged by weight. The weight of each word segment may be determined from the search frequency of the word segment in big data, optionally combined with the user's own search frequency. Sorting the association information by the weight of each word segment lets the user find the intended instruction at the top of the list, which saves operation time and provides a more convenient voice search mode.
As shown in fig. 2, taking navigation as an example, when the user speaks a short instruction such as "go to Chang" under the voice search condition of the embodiment of the present application, the embodiment of the present application may receive the instruction, extract the word features "go" and "Chang", associate on these word features, and provide association information such as "go to Changchun Road No. 8", "go to Chang'anmen", and "go to Changting Street". The association information is sorted by weight and displayed from top to bottom as: "go to Changchun Road No. 8", "go to Chang'anmen", and "go to Changting Street".
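The association-and-ranking step can be sketched as below. The candidate phrases, the frequency tables, and the 50/50 blend of big-data search frequency with per-user search frequency are illustrative assumptions; the embodiment does not fix a particular weighting formula.

```python
# Hypothetical sketch: match candidates by the recognized word features,
# then rank by a weight blending global and per-user search frequency.

def associate(word_features, candidates, global_freq, user_freq, user_ratio=0.5):
    """Return candidate phrases containing every word feature,
    ordered from highest to lowest weight."""
    def weight(phrase):
        return ((1 - user_ratio) * global_freq.get(phrase, 0)
                + user_ratio * user_freq.get(phrase, 0))
    matches = [c for c in candidates if all(f in c for f in word_features)]
    return sorted(matches, key=weight, reverse=True)

# Illustrative data, loosely following the navigation example.
candidates = ["go to Changchun Road No. 8", "go to Chang'anmen",
              "go to Changting Street"]
global_freq = {"go to Changchun Road No. 8": 90, "go to Chang'anmen": 70,
               "go to Changting Street": 40}
user_freq = {"go to Changchun Road No. 8": 5, "go to Chang'anmen": 3}

ranked = associate(["go", "Chang"], candidates, global_freq, user_freq)
```

The list returned by `associate` is what the display would show from top to bottom.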
Step S604: receive a new instruction issued by the user according to the association information. The user may give the next indication for the association information obtained in the above steps by touch screen or by voice. For example, when the user's voice instruction is "second", the embodiment of the present application may recognize the current instruction as the second item in the association information ordering, that is, "go to Chang'anmen".
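Resolving an ordinal follow-up utterance such as "second" against the displayed list might look like this minimal sketch; the ordinal vocabulary and the candidate strings are illustrative assumptions.

```python
# Hypothetical sketch: map an ordinal follow-up utterance to an entry
# in the displayed association list.

ORDINALS = {"first": 0, "second": 1, "third": 2}  # illustrative vocabulary

def resolve_follow_up(utterance, associations):
    """Return the candidate selected by an ordinal utterance,
    or None if the utterance is not a recognized ordinal."""
    key = utterance.strip().strip(".").lower()
    index = ORDINALS.get(key)
    if index is None or index >= len(associations):
        return None
    return associations[index]

options = ["go to Changchun Road No. 8", "go to Chang'anmen",
           "go to Changting Street"]
choice = resolve_follow_up("Second.", options)
```

An unrecognized utterance yields `None`, which a real system would answer with a re-prompt rather than a guess.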
Step S605: execute the instruction content. The embodiment of the present application may receive and execute the instruction content. For example, as shown in fig. 5, when the instruction content is "go to Chang'anmen", the embodiment of the present application may call the electronic map to provide the user with real-time navigation to Chang'anmen.
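Executing the confirmed instruction can be sketched as a small dispatcher; the `navigate` stub and the "go to" prefix convention are assumptions for illustration, not the embodiment's actual intent model.

```python
# Hypothetical sketch: dispatch confirmed instruction content to an
# interaction action such as starting navigation.

def navigate(destination):
    # Stub for calling the electronic map with a destination.
    return f"navigating to {destination}"

def execute_instruction(content):
    """Tiny intent dispatcher: a 'go to X' instruction starts navigation;
    anything else is reported as unsupported."""
    if content.startswith("go to "):
        return navigate(content[len("go to "):])
    return "unsupported instruction"

result = execute_instruction("go to Chang'anmen")
```

A production system would of course support many more intents (media, climate, calls) behind the same dispatch point.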
According to the vehicle voice interaction search method of the embodiment of the present application, after detecting that the vehicle has entered the voice search condition, word features in the received user voice instruction can be extracted, association can be performed on these word features to determine the user's intention, and the corresponding interaction action can be carried out. This saves learning time and learning cost at the initial stage of voice interaction: the user does not need to organize a long sentence, because the intention can be acquired accurately from a short phrase. It improves the user's driving experience and avoids distracting the user during voice interaction, which could cause a traffic accident. It thereby solves the problems in the related art that the user's intention is difficult to identify quickly and accurately, and that a complete instruction can only be acquired through multiple inputs from the user — which distorts the user's driving operation, may even cause a traffic accident, reduces the driving safety of the vehicle, and makes the interaction mode insufficiently intelligent to meet the user's needs.
The following describes a vehicle voice interactive search device according to an embodiment of the present application with reference to the drawings.
FIG. 7 is a block diagram of a vehicle voice interactive search apparatus according to an embodiment of the present application.
As shown in fig. 7, the vehicle voice interactive search apparatus 10 includes: a detection module 100, an association module 200, and an interaction module 300.
Specifically, the detecting module 100 is configured to detect whether the vehicle enters a voice search condition.
The association module 200 is configured to, after it is detected that the vehicle enters the voice search working condition, control a sound receiving device of the vehicle to start receiving sound, identify at least one word feature in the sound reception instruction, and display association information corresponding to the at least one word feature.
The interaction module 300 is configured to acquire the voice instruction determined by the association information and control the vehicle to execute the interaction action according to the voice instruction.
Optionally, in an embodiment of the present application, the detection module 100 includes: the device comprises a receiving unit and a triggering unit.
The receiving unit is used for receiving a voice awakening instruction of a user.
And the triggering unit is used for triggering a voice awakening key of the vehicle.
Optionally, in an embodiment of the present application, the association module 200 is further configured to input at least one word feature into a pre-trained association model to obtain association information.
Optionally, in an embodiment of the present application, the association module 200 further includes: and a reminding unit.
The reminding unit is used for controlling at least one acoustic reminding device and/or at least one optical reminding device of the vehicle to indicate the sound reception state.
Optionally, in an embodiment of the present application, the association module 200 further includes: a weight determination unit and a sorting unit.
The weight determining unit is used for determining the weight of each word segment in the association information.
The sorting unit is used for arranging and displaying the association information from top to bottom according to the weight of each word segment.
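Putting the modules of fig. 7 together, a minimal sketch of the apparatus might look like the following. Only the detection/association/interaction split comes from the embodiment; the method signatures and matching logic are assumptions for illustration.

```python
# Hypothetical sketch of the apparatus of fig. 7 as three cooperating
# modules. Interfaces are illustrative, not the patent's actual design.

class DetectionModule:
    def entered_search_condition(self, event):
        # Voice wake-up instruction or wake-up key triggers the condition.
        return event in ("wake_word", "wake_key")

class AssociationModule:
    def associate(self, word_features, candidates):
        # Keep candidates containing every recognized word feature.
        return [c for c in candidates if all(f in c for f in word_features)]

class InteractionModule:
    def execute(self, instruction):
        return f"executing: {instruction}"

class VehicleVoiceSearchApparatus:
    def __init__(self):
        self.detection = DetectionModule()
        self.association = AssociationModule()
        self.interaction = InteractionModule()

    def handle(self, event, word_features, candidates, choice_index):
        if not self.detection.entered_search_condition(event):
            return None
        options = self.association.associate(word_features, candidates)
        return self.interaction.execute(options[choice_index])

apparatus = VehicleVoiceSearchApparatus()
out = apparatus.handle("wake_key", ["go"], ["go home", "stay put"], 0)
```

The `handle` method chains the three modules in the same order as steps S601–S605 of the method embodiment.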
It should be noted that the explanation of the embodiment of the vehicle voice interaction search method is also applicable to the vehicle voice interaction search apparatus of the embodiment, and details are not repeated here.
According to the vehicle voice interaction search apparatus of the embodiment of the present application, after detecting that the vehicle has entered the voice search condition, word features in the received user voice instruction can be extracted, association can be performed on these word features to determine the user's intention, and the corresponding interaction action can be carried out. This saves learning time and learning cost at the initial stage of voice interaction: the user does not need to organize a long sentence, because the intention can be acquired accurately from a short phrase. It improves the user's driving experience and avoids distracting the user during voice interaction, which could cause a traffic accident. It thereby solves the problems in the related art that the user's intention is difficult to identify quickly and accurately and that a complete instruction can only be acquired through multiple inputs from the user, which makes the user prone to distraction while using voice interaction and may lead to traffic accidents.
Fig. 8 is a schematic structural diagram of a vehicle according to an embodiment of the present application. The vehicle may include:
a memory 801, a processor 802, and a computer program stored on the memory 801 and executable on the processor 802.
The processor 802, when executing the program, implements the vehicle voice interactive search method provided in the above-described embodiments.
Further, the vehicle further includes:
a communication interface 803 for communicating between the memory 801 and the processor 802.
A memory 801 for storing computer programs operable on the processor 802.
The memory 801 may comprise high-speed RAM, and may also include non-volatile memory, such as at least one magnetic disk memory.
If the memory 801, the processor 802 and the communication interface 803 are implemented independently, the communication interface 803, the memory 801 and the processor 802 may be connected to each other via a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 8, but this is not intended to represent only one bus or type of bus.
Alternatively, in practical implementation, if the memory 801, the processor 802 and the communication interface 803 are integrated into one chip, the memory 801, the processor 802 and the communication interface 803 may communicate with each other through an internal interface.
The processor 802 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the search method for vehicle voice interaction as above.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or N embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "N" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more N executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of implementing the embodiments of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or N wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the N steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are well known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A vehicle voice interaction searching method is characterized by comprising the following steps:
detecting whether the vehicle enters a voice searching working condition or not;
after detecting that the vehicle enters the voice search working condition, controlling a sound receiving device of the vehicle to start receiving sound, identifying at least one word feature in a sound reception instruction, and simultaneously displaying association information corresponding to the at least one word feature; and
and acquiring a voice instruction determined by the association information, and controlling the vehicle to execute an interactive action according to the voice instruction.
2. The method of claim 1, wherein the detecting whether the vehicle enters a voice search condition comprises:
receiving a voice awakening instruction of a user;
or triggering a voice wake-up key of the vehicle.
3. The method according to claim 1, wherein the displaying the associated information corresponding to the at least one word feature comprises:
and inputting the at least one word feature into a pre-trained association model to obtain the association information.
5. The method of claim 1, further comprising, after controlling the sound receiving device of the vehicle to start receiving sound:
controlling at least one acoustic reminding device and/or at least one optical reminding device of the vehicle to indicate the sound reception state.
5. The method according to claim 1, wherein the displaying the associated information corresponding to the at least one word feature comprises:
determining the weight of each word segment in the associated information;
and arranging and displaying from top to bottom according to the weight of each word segment.
6. A vehicle voice interactive search apparatus, comprising:
the detection module is used for detecting whether the vehicle enters a voice search working condition or not;
the association module is used for controlling a sound receiving device of the vehicle to start receiving sound after it is detected that the vehicle enters the voice search working condition, identifying at least one word feature in a sound reception instruction, and displaying association information corresponding to the at least one word feature; and
and the interaction module is used for acquiring the voice instruction determined by the association information and controlling the vehicle to execute the interaction action according to the voice instruction.
7. The apparatus of claim 6, wherein the association module is further configured to input the at least one word feature into a pre-trained association model to obtain the association information.
8. The apparatus of claim 6, wherein the association module further comprises:
the weight determining unit is used for determining the weight of each word segment in the associated information;
and the sequencing unit is used for carrying out sequencing display from top to bottom according to the weight of each word segment.
9. A vehicle, characterized by comprising: memory, processor and computer program stored on the memory and executable on the processor, the processor executing the program to implement the vehicle voice interactive search method according to any one of claims 1 to 5.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program is executed by a processor for implementing a vehicle voice interactive search method according to any one of claims 1 to 5.
CN202210238276.8A 2022-03-11 2022-03-11 Vehicle voice interaction searching method and device, vehicle and storage medium Pending CN114637487A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210238276.8A CN114637487A (en) 2022-03-11 2022-03-11 Vehicle voice interaction searching method and device, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210238276.8A CN114637487A (en) 2022-03-11 2022-03-11 Vehicle voice interaction searching method and device, vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN114637487A true CN114637487A (en) 2022-06-17

Family

ID=81947827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210238276.8A Pending CN114637487A (en) 2022-03-11 2022-03-11 Vehicle voice interaction searching method and device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN114637487A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination