CN116558536A - Vehicle navigation voice interaction method and device - Google Patents

Vehicle navigation voice interaction method and device

Info

Publication number
CN116558536A
CN116558536A (application CN202310472800.2A)
Authority
CN
China
Prior art keywords
user
voice
destination list
search result
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310472800.2A
Other languages
Chinese (zh)
Inventor
林孟超
陈彩可
李龙飞
张炜玮
卢杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Faw Beijing Software Technology Co ltd
FAW Group Corp
Original Assignee
Faw Beijing Software Technology Co ltd
FAW Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Faw Beijing Software Technology Co ltd, FAW Group Corp filed Critical Faw Beijing Software Technology Co ltd
Priority to CN202310472800.2A priority Critical patent/CN116558536A/en
Publication of CN116558536A publication Critical patent/CN116558536A/en
Pending legal-status Critical Current

Links

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/3407 - Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415 - Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/36 - Input/output arrangements for on-board computers
    • G01C21/3605 - Destination input or retrieval
    • G01C21/3608 - Destination input or retrieval using speech input, e.g. using speech recognition
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/36 - Input/output arrangements for on-board computers
    • G01C21/3626 - Details of the output of route guidance instructions
    • G01C21/3629 - Guidance using speech or audio output, e.g. text-to-speech

Abstract

The application discloses a vehicle-mounted navigation voice interaction method and device, an electronic device, a storage medium and a vehicle. The method comprises: recognizing a user voice request and generating a destination list from the search results of the voice request; sorting the destination list by the degree to which each search result matches the voice request; broadcasting the top-ranked search result by voice and requesting user confirmation; and, depending on whether the user confirms, executing the top-ranked search result or reordering the destination list. The method reduces the number of times the user has to look at the screen while still allowing the correct address to be selected, thereby safeguarding driving safety.

Description

Vehicle navigation voice interaction method and device
Technical Field
The invention relates to the technical field of vehicle navigation, in particular to a vehicle navigation voice interaction method and device.
Background
In automobile navigation, the navigator exchanges signals with positioning satellites to determine the vehicle owner's exact position; the position is fed back to the navigator, which compares it with the map stored in the memory card and displays the position on the screen.
An automobile navigation system, also called an automobile GPS navigation system, mainly comprises a host unit, a display screen, an operating keypad and an antenna. Its main functions are: 1. route navigation; 2. turn-by-turn voice prompts; 3. track display and positioning; 4. measurement of travel speed. Because automobile navigation incorporates the GPS global satellite positioning function, the owner can know the vehicle's exact position at any time and place while driving.
After a user queries a destination, if the destination announced by the navigation voice is not the one the user wants to go to, the user has to check the screen repeatedly to select the correct address, which easily creates safety hazards.
Therefore, how to reduce the number of times the user looks at the screen while still enabling the user to select the correct address is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The invention provides a vehicle navigation voice interaction method and device to solve the prior-art problem that, when the destination announced by the vehicle navigation voice is not the one the user wants to go to, the user has to check the screen repeatedly to select the correct address, which easily creates safety hazards.
To achieve this purpose, the invention provides a vehicle navigation voice interaction method comprising the following steps:
recognizing a user voice request, and generating a destination list according to the search results of the voice request;
sorting the destination list according to the matching degree between the search results and the voice request;
broadcasting the top-ranked search result by voice and requesting user confirmation;
executing the top-ranked search result or reordering the destination list, depending on whether the user confirms.
In some of these embodiments, prior to identifying the user voice request, the method further comprises:
identifying facial features of a user and judging whether interaction conditions are met;
and when the interaction condition is met, continuing to identify the user voice request.
In some embodiments, identifying the facial features of the user and judging whether the interaction condition is met specifically includes:
presetting an advanced age threshold;
identifying facial features of the user, and judging the age of the user according to the facial features;
judging whether the age of the user is higher than a preset age threshold value;
and when the age of the driver is higher than a preset age threshold value, the interaction condition is met.
In some embodiments, executing the top-ranked search result or reordering the destination list depending on whether the user confirms specifically includes:
when the user confirms, executing the top-ranked search result;
when the user refuses to confirm, tracking the user's line of sight, and when the user is detected looking at the screen, moving the top-ranked search result out of first place and reordering the other search results in the destination list.
In some embodiments, the method further comprises, after reordering the destination list:
the top search results in the reordered destination list are highlighted bolded.
In some embodiments, the search results include at least one key information associated with the voice request.
Based on the same conception, the invention also provides a vehicle navigation voice interaction device, which comprises:
the generation module is used for identifying the voice request of the user and generating a destination list according to the search result of the voice request;
the matching module is used for sorting the destination list according to the matching degree of the search result and the voice request;
the voice module is used for broadcasting the top-ranked search result by voice and requesting user confirmation;
and the execution module, which executes the top-ranked search result or reorders the destination list depending on whether the user confirms.
Based on the same conception, the invention also provides an electronic device comprising: the device comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus; the memory stores a computer program which, when executed by the processor, causes the processor to execute the steps of the above-described vehicle navigation voice interaction method.
Based on the same idea, the present invention also provides a computer-readable storage medium storing a computer program executable by an electronic device, which when run on the electronic device, causes the electronic device to perform the steps of the above-described car navigation voice interaction method.
Based on the same conception, the invention also provides a vehicle, which specifically comprises:
the electronic equipment is used for realizing the vehicle navigation voice interaction method;
a processor that runs a program that, when run, performs the steps of the above-described car navigation voice interaction method on data output from the electronic device;
and a storage medium for storing a program that, when executed, performs the steps of the above-described car navigation voice interaction method on data output from the electronic device.
Compared with the prior art, the invention has the following beneficial effects:
the invention discloses a vehicle-mounted navigation voice interaction method, a device, electronic equipment, a storage medium and a vehicle, which comprise the steps of identifying a user voice request and generating a destination list according to a search result of the voice request; sorting the destination list according to the matching degree of the search result and the voice request; performing voice broadcasting on the first-ranked search results and requesting confirmation of a user; the first search results are performed or the destination list is reordered based on the results of the user confirmation or not. By the method, the number of times that a user views the screen can be reduced, the correct address can be selected, and driving safety is guaranteed.
Drawings
FIG. 1 is a schematic diagram of some embodiments of a vehicle navigation voice interaction method according to the present invention;
FIG. 2 is a schematic illustration of a normal mode interaction of a vehicle navigation voice interaction method of the present invention in some applications;
FIG. 3 is an interactive schematic diagram of a safety mode of a vehicle navigation voice interaction method of the present invention in some applications;
FIG. 4 is a schematic structural diagram of some embodiments of a vehicle navigation voice interaction device according to the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to some embodiments of the present invention.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings, wherein it is apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the examples and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; "a plurality" generally means at least two.
It should be understood that the term "and/or" as used herein is merely one relationship describing the association of the associated objects, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present application to describe, these descriptions should not be limited to these terms. These terms are only used to distinguish one from another. For example, a first may also be referred to as a second, and similarly, a second may also be referred to as a first, without departing from the scope of embodiments of the present application.
The word "if", as used herein, may be interpreted as "when", "upon", "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)", depending on the context.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such product or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a product or device comprising that element.
In particular, the symbols and/or numerals present in the description, if not marked in the description of the figures, are not numbered.
Referring to fig. 1, a vehicle navigation voice interaction method includes:
s101, recognizing a user voice request, and generating a destination list according to a search result of the voice request;
Specifically, the user voice request is first recognized, a search is performed based on the voice request, and a destination list is then generated from the search results.
In some of these applications, a wake-up word in the user's voice may be recognized before the user voice request is recognized; when the wake-up word is recognized, recognition of the user voice request continues, as sketched below.
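The following is a minimal sketch of such wake-word gating, not the patented implementation; the wake words and the upstream speech-to-text step are assumptions.

```python
# Sketch only: gate request recognition behind a wake word.
# The transcripts are assumed to come from an upstream speech-to-text step.

WAKE_WORDS = ("hello car", "hi navigator")  # illustrative wake words, not from the patent

def contains_wake_word(transcript: str) -> bool:
    text = transcript.lower()
    return any(word in text for word in WAKE_WORDS)

def listen_for_request(transcripts) -> str | None:
    """Return the first utterance spoken after a wake word, or None."""
    awake = False
    for transcript in transcripts:
        if not awake:
            awake = contains_wake_word(transcript)
            continue
        return transcript  # the utterance after the wake word is the voice request
    return None

print(listen_for_request(["hi navigator", "navigate to Beijing Zoo"]))
```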
S102, sorting a destination list according to the matching degree of the search result and the voice request;
specifically, each search result is matched against the voice request, and the destination list is ordered according to the matching degree;
in some of these applications, the destination list is ordered from high to low according to the degree of matching of the search results and the voice request.
It can be appreciated that the matching degree may be based on the degree of correlation of keywords included in the user's voice request with the search results.
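As a rough illustration of such keyword-based ranking, the sketch below scores each result by keyword overlap with the voice request and sorts the destination list from high to low; the scoring function and field names are assumptions, since the disclosure only requires some measure of matching degree.

```python
# Sketch only: rank destinations by keyword overlap with the voice request (step S102).
# The overlap score is an assumed stand-in for the matching degree.

def matching_degree(request: str, result_name: str) -> float:
    request_words = set(request.lower().split())
    result_words = set(result_name.lower().split())
    if not request_words:
        return 0.0
    return len(request_words & result_words) / len(request_words)

def build_destination_list(request: str, search_results: list[dict]) -> list[dict]:
    """Sort search results from highest to lowest matching degree."""
    return sorted(
        search_results,
        key=lambda result: matching_degree(request, result["name"]),
        reverse=True,
    )

results = [
    {"name": "Beijing Aquarium"},
    {"name": "Beijing Zoo"},
    {"name": "Beijing Zoo West Gate"},
]
print([r["name"] for r in build_destination_list("Beijing Zoo", results)])
# ['Beijing Zoo', 'Beijing Zoo West Gate', 'Beijing Aquarium']
```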
S103, broadcasting the top-ranked search result by voice and requesting user confirmation;
Specifically, the content of the top-ranked search result is broadcast to the user by voice, and the user is asked to confirm;
In some of these applications, a time interval may be set for requesting user confirmation; when the user has not responded within that interval, the broadcast is repeated and confirmation is requested again.
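A sketch of such a confirmation loop with a repeat interval is shown below; the timeout value, the prompt wording and the speak/listen helpers are placeholders rather than part of the disclosure.

```python
# Sketch only: broadcast the top result and re-prompt if the user stays silent.
# speak() and wait_for_reply() stand in for the real TTS and ASR interfaces.
import time

CONFIRM_TIMEOUT_S = 8.0  # assumed repeat interval; the patent leaves it settable
MAX_PROMPTS = 3

def speak(text: str) -> None:
    print(f"[TTS] {text}")

def wait_for_reply(timeout_s: float) -> str | None:
    time.sleep(min(timeout_s, 0.01))  # placeholder for listening on the microphone
    return None                       # pretend the user did not answer

def request_confirmation(top_result: dict) -> str | None:
    prompt = (f"Found {top_result['name']}, {top_result['distance_km']} km away. "
              f"Do you want to go there?")
    for _ in range(MAX_PROMPTS):
        speak(prompt)                            # step S103: voice broadcast
        reply = wait_for_reply(CONFIRM_TIMEOUT_S)
        if reply is not None:
            return reply                         # e.g. "yes" or "no"
    return None                                  # no response after the repeats

request_confirmation({"name": "Beijing Zoo", "distance_km": 15})
```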
S104, depending on whether the user confirms, executing the top-ranked search result or reordering the destination list.
In some of these applications, depending on whether the user confirms, the top-ranked search result is executed and navigation starts when the user confirms, and the destination list is reordered when the user refuses to confirm.
It can be understood that, when the user refuses to confirm, the top-ranked search result is judged not to be the address the user currently wants to go to. If it remains in first place, it can interfere with the user's manual selection, especially while driving, causing the user to look at the screen repeatedly and creating safety hazards.
In some embodiments of the present invention, in order to enable the elderly user to use navigation safely during driving, before recognizing the user voice request, the method further includes:
identifying facial features of a user and judging whether interaction conditions are met;
and when the interaction condition is met, continuing to identify the user voice request.
In some of these applications, the facial features of the user are recognized by a camera device, the age of the user is determined from the facial features, and when the age of the user satisfies the interaction condition, recognition of the user voice request continues.
In some of these applications, the secure voice mode may be turned on when the interaction condition is met.
In some of these applications, the user may manually turn on the secure voice mode.
It can be understood that elderly users have reduced eyesight and reaction speed, so repeatedly looking at the screen can easily cause danger. The user's age range can be judged from the user's facial features, and when the user is elderly the safe voice mode can be started automatically; in the safe voice mode, the volume of the voice broadcast can be increased and the interval for requesting confirmation can be shortened.
In some embodiments of the present invention, in order to determine whether the interaction condition is satisfied according to the user's age, identifying the facial features of the user and judging whether the interaction condition is met specifically includes:
presetting an advanced age threshold;
identifying facial features of the user, and judging the age of the user according to the facial features;
judging whether the age of the user is higher than a preset age threshold value;
and when the age of the driver is higher than a preset age threshold value, the interaction condition is met.
In some of these applications, the age threshold may be set to 60 years. The user's age range is estimated from the facial features, for example between 65 and 70 years; the minimum of that range is compared with the age threshold, and the interaction condition is satisfied when the minimum exceeds the threshold. For example, when the minimum of the identified age range is 65 years, it exceeds the 60-year age threshold, so the condition is met.
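The check can be sketched as below using the example numbers above (60-year threshold, estimated range of 65 to 70); the face-analysis step that produces the range is assumed to come from an external model and is not shown.

```python
# Sketch only: decide whether the interaction condition (driver of advanced age) is met.
# The (age_min, age_max) range is assumed to come from an external face-analysis model.

ELDERLY_AGE_THRESHOLD = 60  # preset threshold; modifiable by the user

def interaction_condition_met(age_min: int, threshold: int = ELDERLY_AGE_THRESHOLD) -> bool:
    """Met when even the lowest estimated age exceeds the threshold."""
    return age_min > threshold

def maybe_enable_safe_voice_mode(age_min: int, age_max: int) -> bool:
    if interaction_condition_met(age_min):
        # In safe voice mode the broadcast volume can be raised and the
        # confirmation interval shortened, per the description above.
        return True
    return False

print(maybe_enable_safe_voice_mode(65, 70))  # True: 65 > 60
print(maybe_enable_safe_voice_mode(43, 48))  # False
```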
In some embodiments of the present invention, to effectively reduce the number of times the user looks at the screen, executing the top-ranked search result or reordering the destination list depending on whether the user confirms specifically includes:
when the user confirms, executing the top-ranked search result;
when the user refuses to confirm, tracking the user's line of sight, and when the user is detected looking at the screen, moving the top-ranked search result out of first place and reordering the other search results in the destination list.
Specifically, when the user confirms, the top-ranked search result is executed and navigation proceeds to the destination. When the user refuses to confirm, the user's line of sight is tracked, and when the user looks at the screen, the top-ranked search result is moved out of first place and the other search results in the destination list are reordered.
In some applications, when the user refuses to confirm, a camera device tracks the user's line of sight. When the user looks at the screen, the first search result is moved out of first place, or moved downwards by a preset distance, and the destination list is then rearranged according to the matching degree between the remaining search results and the voice request, so that the user can quickly find the correct address when glancing at the screen, reducing how often the user looks at the screen. A sketch of this reordering step follows.
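A minimal sketch, assuming the gaze flag and per-result matching scores arrive from earlier steps; the demotion distance is a settable parameter in this sketch, mirroring the preset moving distance mentioned above.

```python
# Sketch only: after a refusal, demote the rejected top result by a preset distance
# and re-rank the remaining results by their stored matching degree.

def reorder_after_refusal(destinations: list[dict], gaze_on_screen: bool,
                          demote_to: int = 3) -> list[dict]:
    if not gaze_on_screen or len(destinations) < 2:
        return destinations
    rejected, rest = destinations[0], destinations[1:]
    rest = sorted(rest, key=lambda r: r["match"], reverse=True)  # re-rank the rest
    insert_at = min(demote_to - 1, len(rest))                    # preset demotion distance
    return rest[:insert_at] + [rejected] + rest[insert_at:]

destinations = [
    {"name": "Beijing Zoo (result A)", "match": 0.9},
    {"name": "Beijing Zoo (result B)", "match": 0.8},
    {"name": "Beijing Zoo West Gate", "match": 0.7},
]
print([d["name"] for d in reorder_after_refusal(destinations, gaze_on_screen=True)])
# ['Beijing Zoo (result B)', 'Beijing Zoo West Gate', 'Beijing Zoo (result A)']
```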
In some embodiments of the present invention, in order to clearly display the reordered list, the method further comprises, after reordering the destination list:
the top search results in the reordered destination list are highlighted bolded.
Specifically, when the list is reordered, the top search results are highlighted by thickening.
In some embodiments of the present invention, in order to make the content of the voice broadcast more comprehensive, each search result includes at least one piece of key information related to the voice request.
It can be understood that the key information is information related to the content of the voice request, such as the route, traffic lights and congestion. Broadcasting this information in full makes it easier for the user to make a judgment and reduces the number of times the screen is viewed.
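For illustration, the sketch below assembles the voice prompt from key information attached to the top-ranked result; the field names (address, distance, duration, traffic lights) are illustrative choices, not fields mandated by the disclosure.

```python
# Sketch only: compose the confirmation prompt from the key information of the
# top-ranked result. The field names are illustrative assumptions.

def compose_broadcast(result: dict) -> str:
    return (
        f"Found {result['name']} at {result['address']}, "
        f"{result['distance_km']} km away, about {result['duration_min']} minutes "
        f"with {result['traffic_lights']} traffic lights. Do you want to go there?"
    )

top_result = {
    "name": "Beijing Zoo",
    "address": "No. 137 Xizhimenwai Street, Xicheng District, Beijing",
    "distance_km": 15,
    "duration_min": 28,
    "traffic_lights": 8,
}
print(compose_broadcast(top_result))
```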
Embodiments of the invention in some applications will be described below with reference to figs. 2 and 3. As shown in figs. 2 and 3:
1. Judging the driver's age from the driver's facial features. Step one: a camera in the vehicle extracts the driver's facial features;
the driver's current age range is judged from the facial features: agemin1-agemax1, e.g. 43-48 years old.
Step two: setting an advanced age threshold (modifiable by the user):
an age threshold, age, is set for determining that the driver is of advanced age, e.g. 65 years old.
Step three: starting the safe-driving voice interaction mode:
when agemin1 > age, the safe-driving voice interaction mode is started. (Whether this mode is on by default can be set automatically, and it can also be turned on/off manually by the vehicle owner.)
Safe driving voice interaction mode operation:
2. At the selection nodes of multi-round dialogues, the amount of information in the voice feedback is increased, reducing the frequency and duration of the user looking at the screen.
1. In the normal mode:
At the selection node of the multi-round interaction, the voice works together with the GUI to feed information back to the user. For example, the user issues the instruction "navigate to Beijing Zoo"; the system searches with "Beijing Zoo" as the keyword, displays the search results on the page, and at the same time prompts the user by voice: "These are the places found for you, please select one." At this point the driver has to check the search results shown on the screen and select the place to go to, which takes the eyes off the road while driving and creates a driving safety hazard;
Multi-round dialogue example:
User: I want to navigate.
Voice assistant: Where do you want to go? - inquiry node
User: Beijing Zoo.
Voice assistant: These are the places found for you, please select one. - selection node
[Normal mode] User: The first one. (needs to look at the screen)
Voice assistant: Three routes were found for you; which one do you want to take? - selection node
User: The first one. (needs to look at the screen)
Voice assistant: OK, now starting navigation to Beijing Zoo for you.
2. In the safety mode:
At the selection node of the multi-round dialogue, the system searches and sorts results using the place name in the user's voice command as the keyword, and broadcasts the top-ranked place information by voice, reducing the dependence on the GUI. For example, the user issues the instruction "navigate to Beijing Zoo"; the system searches and sorts the results with "Beijing Zoo" as the keyword and displays them on the page, with the key information of the top m-1 options (m settable; in this navigation example: place name, address, driving time, driving distance, number of traffic lights) bolded and highlighted. At the same time, it broadcasts by voice the detailed information of the place the user most likely wants (i.e. the result ranked first in the displayed list) for the user to confirm: "Found Beijing Zoo for you, located at [detailed address], XX km away from you. Do you want to go there?" If the recommended place is the one the user wants, the user can confirm directly and set off without looking at the screen, avoiding the potential safety hazard of taking the eyes off the road while driving.
If the user gives a negative voice response directly, and the in-car camera detects the user's gaze turning to the screen (indicating that the option recommended by the system is not what the user intended), the GUI page drops the original first search result to the n-th position (n settable, e.g. third), while the key information of the options originally ranked 2, 3, ..., m (m settable, m no greater than n, e.g. m = 3; in this navigation example: place name, address, driving time, driving distance, number of traffic lights) is bolded and highlighted, making it easy for the user to view the screen briefly and select the desired result.
If the search results based on the keyword "Beijing zoo" are:
Beijing Zoo | No. 137 Xizhimenwai Street, Xicheng District, Beijing | 16 km | 30 min | 10 traffic lights
Beijing Zoo | No. 137 Xizhimenwai Street, Xicheng District, Beijing | 15 km | 28 min | 8 traffic lights
Beijing Zoo West Gate | No. 137 Xizhimenwai Street, Xicheng District, Beijing | 19 km | 35 min | 12 traffic lights
...
then after reordering the results become:
Beijing Zoo | No. 137 Xizhimenwai Street, Xicheng District, Beijing | 15 km | 28 min | 8 traffic lights
Beijing Zoo West Gate | No. 137 Xizhimenwai Street, Xicheng District, Beijing | 19 km | 35 min | 12 traffic lights
Beijing Zoo | No. 137 Xizhimenwai Street, Xicheng District, Beijing | 16 km | 30 min | 10 traffic lights
...
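Under the rule described above (drop the original first result to the n-th position and bold the key information of the options originally ranked 2 to m), the example can be reproduced with the sketch below; the row fields and the "bold" flag are illustrative, and n = m = 3 matches the example values.

```python
# Sketch only: safe-mode GUI update. The original first option drops to position n,
# and the key information of options originally ranked 2..m is marked bold.

def safe_mode_gui_update(rows: list[dict], n: int = 3, m: int = 3) -> list[dict]:
    assert m <= n and len(rows) >= n
    first, rest = rows[0], rows[1:]
    reordered = rest[:n - 1] + [first] + rest[n - 1:]  # drop original first to n-th place
    for row in reordered:
        row["bold"] = False
    for original_rank in range(2, m + 1):              # bold options originally ranked 2..m
        rows[original_rank - 1]["bold"] = True
    return reordered

rows = [
    {"name": "Beijing Zoo", "km": 16, "min": 30, "lights": 10},
    {"name": "Beijing Zoo", "km": 15, "min": 28, "lights": 8},
    {"name": "Beijing Zoo West Gate", "km": 19, "min": 35, "lights": 12},
]
for row in safe_mode_gui_update(rows):
    marker = "**" if row["bold"] else "  "
    print(f'{marker} {row["name"]} {row["km"]} km {row["min"]} min {row["lights"]} lights')
```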
Multi-round dialogue example 1:
User: I want to navigate.
Voice assistant: Where do you want to go? - inquiry node
User: Beijing Zoo.
[Safety mode] Voice assistant: Found Beijing Zoo for you, located at [detailed address], XX km away from you. Do you want to go there? - selection node
[Safety mode] User: Go there. (no need to look at the screen)
Voice assistant: The fastest route has been found for you and is expected to take XX minutes. Shall we set off? - selection node
User: Set off. (no need to look at the screen)
Voice assistant: OK, now starting navigation to Beijing Zoo for you.
Multi-round dialogue example 2:
User: I want to navigate.
Voice assistant: Where do you want to go? - inquiry node
User: Beijing Zoo.
[Safety mode] Voice assistant: Found Beijing Zoo for you, located at [detailed address], XX km away from you. Do you want to go there? - selection node
(The user does not reply and turns their gaze to the screen.)
[Safety mode] (The results displayed on the system GUI are not reordered.)
[Safety mode] User: The first one. (needs to look at the screen, but the time spent looking and deciding is reduced)
Voice assistant: The fastest route has been found for you and is expected to take XX minutes. Shall we set off? - selection node
User: Set off. (no need to look at the screen)
Voice assistant: OK, now starting navigation to Beijing Zoo for you.
Multi-round dialogue example 3:
User: I want to navigate.
Voice assistant: Where do you want to go? - inquiry node
User: Beijing Zoo.
[Safety mode] Voice assistant: Found Beijing Zoo for you, located at [detailed address], XX km away from you. Do you want to go there? - selection node
[Safety mode] User: Not this one. (while turning to look at the screen)
[Safety mode] Voice assistant: OK, please select the place you want to go to.
[Safety mode] (The results displayed on the system GUI are reordered according to the rules above, and the designated results are bolded and highlighted.)
[Safety mode] User: The first one. (needs to look at the screen, but the time spent looking and deciding is reduced)
Voice assistant: The fastest route has been found for you and is expected to take XX minutes. Shall we set off? - selection node
User: Set off. (no need to look at the screen)
Voice assistant: OK, now starting navigation to Beijing Zoo North Gate for you.
For the purposes of simplicity of explanation, the method steps disclosed in the above embodiments are depicted as a series of acts in a combination, but it should be understood by those skilled in the art that the embodiments of the present invention are not limited by the order of acts described, as some steps may occur in other order or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Any process or method described in a flow chart or otherwise herein may be understood as representing a module, segment or portion of code comprising executable instructions for implementing one or more steps of a specific logic function or procedure. The scope of the preferred embodiments of the present invention includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in reverse order, or in which computer instructions are executed in loops, branches or other program structures to implement the corresponding functions, depending on the function involved, as would be understood by those skilled in the art practicing the embodiments of the present invention.
As shown in fig. 4, based on the same concept, the present invention further provides a vehicle navigation voice interaction device, including:
a generating module 201, configured to identify a user voice request, and generate a destination list according to a search result of the voice request;
a matching module 202, configured to sort the destination list according to the matching degree of the search result and the voice request;
the voice module 203, configured to broadcast the top-ranked search result by voice and request user confirmation;
the execution module 204, which executes the top-ranked search result or reorders the destination list depending on whether the user confirms.
Specifically, the vehicle navigation voice interaction device provided by the invention comprises a generation module 201, a matching module 202, a voice module 203 and an execution module 204. The generation module 201 is used for recognizing a user voice request and generating a destination list according to the search results of the voice request; the matching module 202 is used for sorting the destination list according to the matching degree between the search results and the voice request; the voice module 203 is used for broadcasting the top-ranked search result by voice and requesting user confirmation; and the execution module 204 executes the top-ranked search result or reorders the destination list depending on whether the user confirms.
It should be noted that, although only some basic functional modules are disclosed in this embodiment of the present invention, the composition of the system is not limited to these basic functional modules. Rather, this embodiment is intended to convey that one skilled in the art can add one or more functional modules to the basic functional modules to form any number of embodiments or technical solutions; that is, the system is open rather than closed, and the scope of protection of the claims is not limited to the disclosed basic functional modules merely because this embodiment only discloses individual basic functional modules. Meanwhile, for convenience of description, the above device is described as being divided by function into various units and modules. Of course, the functions of the units and modules may be implemented in one or more pieces of software and/or hardware when implementing the invention.
The embodiments of the system described above are merely illustrative. For example, the functional modules, units, subsystems and the like in the system may or may not be physically separate and may or may not be physical units; that is, they may be located in one place or distributed over a plurality of different systems, subsystems or modules. Those skilled in the art may select some or all of the functional modules, units or subsystems according to actual needs to achieve the purposes of the embodiments of the present invention, and in that case those skilled in the art can understand and implement the invention without inventive effort.
As shown in fig. 5, based on the same concept, the present invention also provides an electronic device including: a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus; the memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of the vehicle navigation voice interaction method described above.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The communication bus mentioned for the electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc. The communication bus may be classified as an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figures, but this does not mean that there is only one bus or one type of bus.
The electronic device includes a hardware layer, an operating system layer running on top of the hardware layer, and an application layer running on top of the operating system. The hardware layer includes hardware such as a central processing unit (CPU), a memory management unit (MMU), and memory. The operating system may be any one or more computer operating systems that implement electronic device control via processes, such as a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, or a Windows operating system. In addition, in the embodiment of the present invention, the electronic device may be a handheld device such as a smart phone or a tablet computer, or an electronic device such as a desktop computer or a portable computer, which is not particularly limited in the embodiment of the present invention.
The execution body controlled by the electronic device in the embodiment of the invention can be the electronic device or a functional module in the electronic device, which can call a program and execute the program. The electronic device may obtain firmware corresponding to the storage medium, where the firmware corresponding to the storage medium is provided by the vendor, and the firmware corresponding to different storage media may be the same or different, which is not limited herein. After the electronic device obtains the firmware corresponding to the storage medium, the firmware corresponding to the storage medium can be written into the storage medium, specifically, the firmware corresponding to the storage medium is burned into the storage medium. The process of burning the firmware into the storage medium may be implemented by using the prior art, and will not be described in detail in the embodiment of the present invention.
The electronic device may further obtain a reset command corresponding to the storage medium, where the reset command corresponding to the storage medium is provided by the provider, and the reset commands corresponding to different storage media may be the same or different, which is not limited herein.
At this time, the storage medium of the electronic device is a storage medium in which the corresponding firmware is written, and the electronic device may respond to a reset command corresponding to the storage medium in which the corresponding firmware is written, so that the electronic device resets the storage medium in which the corresponding firmware is written according to the reset command corresponding to the storage medium. The process of resetting the storage medium according to the reset command may be implemented in the prior art, and will not be described in detail in the embodiments of the present invention.
Based on the same idea, the present invention also provides a computer-readable storage medium storing a computer program executable by an electronic device, which when run on the electronic device, causes the electronic device to perform the steps of the above-described car navigation voice interaction method.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Based on the same conception, the invention also provides a vehicle, which specifically comprises:
the electronic equipment is used for realizing the vehicle navigation voice interaction method;
a processor that runs a program that, when run, performs the steps of the above-described car navigation voice interaction method on data output from the electronic device;
and a storage medium for storing a program that, when executed, performs the steps of the above-described car navigation voice interaction method on data output from the electronic device.
Specifically, the electronic device, the processor, and the storage medium in the present embodiment refer to the above-described embodiments.
By applying the above technical solution, the vehicle-mounted navigation voice interaction method and device, electronic device, storage medium and vehicle comprise: recognizing a user voice request and generating a destination list from the search results of the voice request; sorting the destination list by the degree to which each search result matches the voice request; broadcasting the top-ranked search result by voice and requesting user confirmation; and, depending on whether the user confirms, executing the top-ranked search result or reordering the destination list. The method reduces the number of times the user has to look at the screen while still allowing the correct address to be selected, thereby safeguarding driving safety.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combined technical features, the combinations should be considered to fall within the scope of this description.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example: any of the embodiments claimed in the claims may be used in any combination of the embodiments of the invention.
In the description of the present specification, the descriptions of the terms "one embodiment," "example," "specific example," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In addition, the technical solutions of the embodiments of the present invention may be combined with each other, but it is necessary to be based on the fact that those skilled in the art can implement the technical solutions, and when the technical solutions are contradictory or cannot be implemented, the combination of the technical solutions should be considered as not existing, and not falling within the scope of protection claimed by the present invention.
All of the features disclosed in this specification, or all of the steps in a method or process disclosed, may be combined in any combination, except for mutually exclusive features and/or steps. Any feature disclosed in this specification may be replaced by alternative features serving the same or equivalent purpose, unless expressly stated otherwise. That is, each feature is one example only of a generic series of equivalent or similar features, unless expressly stated otherwise. Like reference numerals refer to like elements throughout the specification.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including the corresponding claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including the corresponding claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (10)

1. A vehicle navigation voice interaction method, characterized by comprising the following steps:
recognizing a user voice request, and generating a destination list according to the search results of the voice request;
sorting the destination list according to the matching degree between the search results and the voice request;
broadcasting the top-ranked search result by voice and requesting user confirmation;
executing the top-ranked search result or reordering the destination list, depending on whether the user confirms.
2. The car navigation voice interaction method according to claim 1, wherein before recognizing the user voice request, the method further comprises:
identifying facial features of a user and judging whether interaction conditions are met;
and when the interaction condition is met, continuing to identify the user voice request.
3. The vehicle-mounted navigation voice interaction method according to claim 2, wherein the steps of recognizing facial features of a user and judging whether interaction conditions are satisfied include:
presetting an advanced age threshold;
identifying facial features of the user, and judging the age of the user according to the facial features;
judging whether the age of the user is higher than a preset age threshold value;
and when the age of the driver is higher than a preset age threshold value, the interaction condition is met.
4. The car navigation voice interaction method according to claim 1, wherein executing the top-ranked search result or reordering the destination list depending on whether the user confirms specifically comprises:
when the user confirms, executing the top-ranked search result;
when the user refuses to confirm, tracking the user's line of sight, and when the user is detected looking at the screen, moving the top-ranked search result out of first place and reordering the other search results in the destination list.
5. The car navigation voice interaction method according to claim 4, wherein after reordering the destination list, the method further comprises:
the top search results in the reordered destination list are highlighted in bold.
6. The car navigation voice interaction method according to claim 1, wherein the search result includes at least one piece of key information related to the voice request.
7. A vehicle-mounted navigation voice interaction device, characterized by comprising:
the generation module is used for identifying the voice request of the user and generating a destination list according to the search result of the voice request;
the matching module is used for sorting the destination list according to the matching degree of the search result and the voice request;
the voice module is used for broadcasting the top-ranked search result by voice and requesting user confirmation;
and the execution module, which executes the top-ranked search result or reorders the destination list depending on whether the user confirms.
8. An electronic device, comprising: the device comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus; the memory has stored therein a computer program which, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 6.
9. A computer readable storage medium, characterized in that it stores a computer program executable by an electronic device, which, when run on the electronic device, causes the electronic device to perform the steps of the method of any one of claims 1 to 6.
10. A vehicle, characterized by comprising:
an electronic device for implementing the car navigation voice interaction method of any one of claims 1 to 6;
a processor that runs a program that, when run, performs the steps of the car navigation voice interaction method according to any one of claims 1 to 6 on data output from the electronic device;
a storage medium storing a program that, when executed, performs the steps of the car navigation voice interaction method according to any one of claims 1 to 6 on data output from an electronic device.
CN202310472800.2A 2023-04-27 2023-04-27 Vehicle navigation voice interaction method and device Pending CN116558536A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310472800.2A CN116558536A (en) 2023-04-27 2023-04-27 Vehicle navigation voice interaction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310472800.2A CN116558536A (en) 2023-04-27 2023-04-27 Vehicle navigation voice interaction method and device

Publications (1)

Publication Number Publication Date
CN116558536A true CN116558536A (en) 2023-08-08

Family

ID=87495811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310472800.2A Pending CN116558536A (en) 2023-04-27 2023-04-27 Vehicle navigation voice interaction method and device

Country Status (1)

Country Link
CN (1) CN116558536A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1877576A (en) * 2005-06-06 2006-12-13 汤姆森许可贸易公司 Method and device for searching a data unit in a database
CN101960455A (en) * 2008-03-12 2011-01-26 雅虎公司 System, method, and/or apparatus for reordering search results
CN103699023A (en) * 2013-11-29 2014-04-02 安徽科大讯飞信息科技股份有限公司 Multi-candidate POI (Point of Interest) control method and system of vehicle-mounted equipment
CN104321622A (en) * 2012-06-05 2015-01-28 苹果公司 Context-aware voice guidance
KR20190058918A (en) * 2017-11-22 2019-05-30 현대자동차주식회사 Apparatus and method for processing voice command of vehicle
CN110908718A (en) * 2018-09-14 2020-03-24 上海擎感智能科技有限公司 Face recognition activated voice navigation method, system, storage medium and equipment
CN112212880A (en) * 2020-09-27 2021-01-12 上汽通用五菱汽车股份有限公司 Voice navigation method, screen-free vehicle-mounted equipment, system and readable storage medium
CN112820284A (en) * 2020-12-28 2021-05-18 恒大新能源汽车投资控股集团有限公司 Voice interaction method and device, electronic equipment and computer readable storage medium
CN113421565A (en) * 2021-07-19 2021-09-21 北京百度网讯科技有限公司 Search method, search device, electronic equipment and storage medium
CN114461281A (en) * 2021-12-30 2022-05-10 惠州华阳通用智慧车载系统开发有限公司 Vehicle machine mode switching method
CN114865751A (en) * 2022-06-11 2022-08-05 湖州物物通科技有限公司 Charging reminding method, system and storage medium
CN115230713A (en) * 2021-04-23 2022-10-25 前海七剑科技(深圳)有限公司 Driving assistance method and device, electronic equipment and storage medium
CN115547332A (en) * 2022-09-22 2022-12-30 中国第一汽车股份有限公司 Sight attention-based awakening-free intention recall method and system and vehicle

Similar Documents

Publication Publication Date Title
RU2683891C2 (en) System (options) and method for selecting parking space for vehicle
US11173927B2 (en) Method, apparatus, computer device and storage medium for autonomous driving determination
JP6494782B2 (en) Notification control device and notification control method
CN111460068A (en) Interest point searching method, readable storage medium and electronic device
US20160018230A1 (en) Multiple destination vehicle interface
US10992809B2 (en) Information providing method, information providing system, and information providing device
US10866107B2 (en) Navigation system
US10916083B2 (en) Vehicle exit management system and gate terminal
CN110930765B (en) Early warning priority determination method, device, storage medium and device
US20230106421A1 (en) Method For Navigating Vehicle And Electronic Device
CN112820284A (en) Voice interaction method and device, electronic equipment and computer readable storage medium
CN110750279A (en) Vehicle-mounted system upgrading method and system, vehicle and storage medium
CN113608628A (en) Interest point input method, device, equipment and storage medium
CN109308674B (en) Order address processing method and device and terminal equipment
CN116558536A (en) Vehicle navigation voice interaction method and device
US20150149068A1 (en) Methods and systems for auto predicting using a navigation system
CN114666765A (en) Method and device for seeking vehicle use help from inside to outside of vehicle
CN111915913A (en) Severe weather prompting method, prompting system, storage medium and electronic device
US9201926B2 (en) Integrated travel services
CN115691203A (en) Urban road berth induction method, device, equipment and readable storage medium
CN110309375B (en) Information prompting method and device and vehicle-mounted terminal equipment
CN113032663A (en) Parking lot recommendation method, recommendation system, storage medium and electronic device
CN111883116A (en) Voice interaction method, voice interaction device, server and voice navigation system
CN113739816A (en) Vehicle navigation control method and device and vehicle
US9915549B2 (en) Information processing apparatus, information processing method, and program causing computer to execute processing in information processing apparatus

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination