CN114812591A - Voice navigation method, electronic equipment and computer program product - Google Patents

Voice navigation method, electronic equipment and computer program product

Info

Publication number
CN114812591A
CN114812591A (application CN202210249291.2A)
Authority
CN
China
Prior art keywords
voice
voice content
navigated object
navigation
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210249291.2A
Other languages
Chinese (zh)
Inventor
卞智
唐俊杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Autonavi Software Co Ltd
Original Assignee
Autonavi Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Autonavi Software Co Ltd filed Critical Autonavi Software Co Ltd
Priority to CN202210249291.2A priority Critical patent/CN114812591A/en
Publication of CN114812591A publication Critical patent/CN114812591A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/3407Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/36Input/output arrangements for on-board computers
    • G01C21/3605Destination input or retrieval
    • G01C21/3608Destination input or retrieval using speech input, e.g. using speech recognition
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/36Input/output arrangements for on-board computers
    • G01C21/3626Details of the output of route guidance instructions
    • G01C21/3629Guidance using speech or audio output, e.g. text-to-speech

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Navigation (AREA)

Abstract

Embodiments of the disclosure provide a voice navigation method, electronic equipment, and a computer program product. The method includes: acquiring a navigation route and voice navigation content, where the voice navigation content includes voice content and voice broadcasting opportunities; broadcasting the voice content based on the voice broadcasting opportunities while the navigated object travels along the navigation route; and, after the voice content is broadcasted, dynamically adjusting the voice content of the next voice broadcasting opportunity based on behavior feedback information of the navigated object. This technical scheme avoids the problem that the existing fixed voice broadcasting opportunities and fixed voice content make the user prone to yaw when the user's attention lapses while driving, thereby reducing the user's yaw rate and improving the navigation experience.

Description

Voice navigation method, electronic equipment and computer program product
Technical Field
The present disclosure relates to the field of navigation technologies, and in particular, to a voice navigation method, an electronic device, and a computer program product.
Background
With the development of Internet technology, people's travel increasingly depends on location-based service systems. Location-based services include navigation, route planning, map rendering, and the like. A navigation service guides the user while the vehicle is driving, for example by voice-broadcasting navigation content to prompt the user to perform the corresponding navigation action.
The inventors of the present disclosure found that in existing voice navigation technology, once a navigation route is planned, the voice content and the voice broadcasting opportunities are essentially fixed, and whether the user received the key information in the voice content is not considered. Yaw therefore occurs easily, and in scenarios such as highway entrances and exits it can have serious consequences, such as a long detour or a long time spent returning to the navigation route.
Therefore, in order to improve the navigation experience, it is necessary to provide a solution to the above technical problem, thereby reducing the yaw rate of the navigated object.
Disclosure of Invention
The embodiment of the disclosure provides a voice navigation method, electronic equipment and a computer program product.
In a first aspect, an embodiment of the present disclosure provides a voice navigation method, where the method includes:
acquiring a navigation route and voice navigation content; the voice navigation content comprises voice content and voice broadcasting time;
broadcasting the voice content based on the voice broadcasting opportunity in the process that the navigated object runs based on the navigation route;
after the voice content is broadcasted, dynamically adjusting the voice content of the next voice broadcasting opportunity based on the behavior feedback information of the navigated object.
Further, dynamically adjusting the voice content of the next voice broadcasting opportunity based on the behavior feedback information of the navigated object after the voice content is broadcasted includes:
after the voice content is broadcasted, detecting behavior feedback information of the navigated object;
judging whether the navigated object notices key information in the voice content or not based on the behavior feedback information;
and after determining that the navigated object did not notice the voice content, selecting voice content that includes the key information for broadcasting at the next voice broadcasting opportunity.
Further, determining whether the navigated object notices key information in the voice content based on the behavior feedback information includes:
after the voice content is broadcasted, whether the navigated object notices key information in the voice content is determined based on the matching relation between the behavior feedback information of the navigated object and the navigation action corresponding to the voice content.
Further, determining whether the navigated object notices key information in the voice content based on a matching relationship between the behavior feedback information of the navigated object and the navigation action corresponding to the voice content includes:
when the voice content relates to a navigation action for guiding the navigated object to turn or travel at a complex intersection, determining whether the navigated object notices key information in the voice content based on the travel speed of the navigated object.
Further, when the voice content relates to a navigation action for guiding the navigated object to turn or travel at a complex intersection, determining whether the navigated object notices key information in the voice content based on the travel speed of the navigated object comprises:
and when the voice content relates to guiding the navigated object to turn, and the traveling speed of the navigated object is not reduced, determining that the navigated object does not notice key information in the voice content.
Further, when the voice content relates to a navigation action for guiding the navigated object to turn or travel at a complex intersection, determining whether the navigated object notices key information in the voice content based on the travel speed of the navigated object comprises:
when the voice content relates to a navigation action for guiding the navigated object to drive at a complex intersection, if the deceleration amplitude of the navigated object is greater than or equal to a preset speed value, determining that the navigated object notices the voice navigation content.
Further, determining whether the navigated object notices key information in the voice content based on a matching relationship between the behavior feedback information of the navigated object and the navigation action corresponding to the voice content includes:
when the voice content relates to a navigation action for guiding the navigated object to travel in a target lane, judging whether the navigated object is switched to the target lane based on the positioning information of the navigated object, and determining whether the navigated object notices key information in the voice content based on the judgment result.
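By way of illustration only (not part of the disclosed embodiments), the lane check in this aspect can be sketched as inferring a lane index from positioning data and comparing it with the prompted target lane. The lane width, offsets, and function names below are assumptions for the example, not the patent's method.

```python
def current_lane(lateral_offset_m, lane_width_m=3.5, num_lanes=3):
    """Map a positioning-derived lateral offset from the road's left edge
    to a 1-based lane index, clamped to the number of lanes."""
    lane = int(lateral_offset_m // lane_width_m) + 1
    return max(1, min(num_lanes, lane))

def switched_to_target_lane(lateral_offset_m, target_lane):
    """True if the lane inferred from positioning equals the target lane
    named in the voice content."""
    return current_lane(lateral_offset_m) == target_lane
```

With a 3.5 m lane width, an offset of 8 m falls in lane 3, so a prompt to move into lane 3 would be judged as followed.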
Further, dynamically adjusting the voice content of the next voice broadcasting opportunity based on the behavior feedback information of the navigated object after the voice content is broadcasted includes:
and after the voice content is broadcasted, adding a next voice broadcasting opportunity and next voice content corresponding to that opportunity, wherein the added next voice content includes the key information of the voice content at least once.
In a second aspect, an embodiment of the present invention provides a voice navigation apparatus, including:
the acquisition module is configured to acquire a navigation route and voice navigation content; the voice navigation content comprises voice content and voice broadcasting time;
the broadcasting module is configured to broadcast the voice content based on the voice broadcasting opportunity in the process that the navigated object runs based on the navigation route;
and the adjusting module is configured to dynamically adjust the voice content of the next voice broadcasting opportunity based on the behavior feedback information of the navigated object after the voice content is broadcasted.
These functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In one possible design, the apparatus includes a memory configured to store one or more computer instructions that enable the apparatus to perform the corresponding method, and a processor configured to execute the computer instructions stored in the memory. The apparatus may also include a communication interface for the apparatus to communicate with other devices or a communication network.
In a third aspect, the disclosed embodiments provide an electronic device, comprising a memory, a processor, and a computer program stored on the memory, wherein the processor executes the computer program to implement the method of any one of the above aspects.
In a fourth aspect, the disclosed embodiments provide a computer-readable storage medium for storing computer instructions for use by any one of the above apparatuses, the computer instructions, when executed by a processor, being configured to implement the method of any one of the above aspects.
In a fifth aspect, the disclosed embodiments provide a computer program product comprising computer instructions that, when executed by a processor, implement the method of any one of the above aspects.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
In voice navigation, after the voice content corresponding to one voice broadcasting opportunity has been broadcasted, the behavior feedback information of the navigated object is detected in real time, and the voice content corresponding to the next voice broadcasting opportunity is dynamically adjusted based on that feedback. The embodiments of the present disclosure thus avoid the problem that the existing fixed voice broadcasting opportunities and fixed voice content make the user prone to yaw when the user's attention lapses while driving, reduce the user's yaw rate, and improve the navigation experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Other features, objects, and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments when taken in conjunction with the accompanying drawings. In the drawings:
FIG. 1 illustrates a flow diagram of a voice navigation method according to an embodiment of the present disclosure;
FIG. 2 shows a voice navigation diagram in an application scenario where a user drives near an intersection requiring a turn according to an embodiment of the present disclosure;
FIG. 3 illustrates a voice navigation diagram in an application scenario where a user drives near a complex intersection according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating voice navigation in an application scenario where a user needs to switch lanes during driving according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating an application scenario of voice navigation according to an embodiment of the present disclosure;
FIG. 6 shows a block diagram of a voice navigation device according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device suitable for implementing a voice navigation method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. Also, for the sake of clarity, parts not relevant to the description of the exemplary embodiments are omitted in the drawings.
In the present disclosure, it is to be understood that terms such as "including" or "having," etc., are intended to indicate the presence of the disclosed features, numbers, steps, actions, components, parts, or combinations thereof, and do not preclude the possibility that one or more other features, numbers, steps, actions, components, parts, or combinations thereof are present or added.
It should be further noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
The details of the embodiments of the present disclosure are described in detail below with reference to specific embodiments.
Fig. 1 shows a flow chart of a voice navigation method according to an embodiment of the present disclosure. As shown in fig. 1, the voice navigation method includes the following steps:
in step S101, a navigation route and voice navigation content are acquired; the voice navigation content comprises voice content and voice broadcasting time;
in step S102, in the process that the navigated object travels based on the navigation route, broadcasting the voice content based on the voice broadcasting opportunity;
in step S103, after the voice content is broadcasted, the voice content at the next voice broadcasting opportunity is dynamically adjusted based on the behavior feedback information of the navigated object.
In this embodiment, the voice navigation method is executed at the navigation terminal. Voice navigation can be understood as voice-assisted navigation, which employs representative intelligent voice technologies such as speech recognition and speech encoding/decoding and is typically applied in vehicles, user terminals, and the like. The voice content reminds the navigated object, by voice interaction during navigation, of information such as the driving direction and the lane; that is, the voice content mainly concerns the navigation action the user is about to perform. A voice broadcasting opportunity may be a fixed GPS position point: after the navigation route is planned, the voice content and the voice broadcasting opportunities are essentially fixed, and the navigation terminal matches the current positioning information of the navigated object against the voice broadcasting opportunities and broadcasts the corresponding voice content whenever a match occurs. It can be understood that each voice broadcasting opportunity may correspond to one or more candidate broadcast contents, from which the navigation terminal selects one or more voice contents randomly or according to a preset rule.
The navigated object may include a person, and the navigation terminal may include, but is not limited to, a mobile phone, iPad, computer, smart watch, vehicle, robot, and the like. The navigation server plans a navigation route based on the start point and the end point and then sends the navigation route and the navigation content to the navigated object. In voice navigation, the navigation content may include, but is not limited to, voice content and voice broadcasting opportunities, which are set in correspondence with each other.
While the navigated object travels along the navigation route, the navigation terminal detects the current positioning information of the navigated object in real time, matches it against the voice broadcasting opportunities in the voice navigation content, and broadcasts the corresponding voice content when a match succeeds.
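The position-matching step can be sketched as follows. This is an illustrative assumption, not the patent's implementation: broadcast points, the 30 m matching radius, and all names are hypothetical.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_broadcast_point(position, broadcast_points, radius_m=30.0):
    """Return the first not-yet-played broadcast point within radius_m of
    the current (lat, lon) position fix, or None if no point matches."""
    for point in broadcast_points:
        if point.get("played"):
            continue
        if haversine_m(position[0], position[1],
                       point["lat"], point["lon"]) <= radius_m:
            return point
    return None
```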
In the embodiment of the present disclosure, each time the voice content has been broadcasted, the navigation terminal further detects behavior feedback information of the navigated object. Behavior feedback information is information fed back by the driving behavior of the navigated object, including but not limited to speed changes, driving-direction changes, and positioning information. It can be detected by the vehicle the navigated object is driving or by a portable handheld terminal. From this information it can be judged whether the navigated object performed the corresponding behavior in response to the voice content just played: if it did, the navigated object can be considered to have received the voice content; otherwise, it can be considered not to have. In the embodiments of the present disclosure, whether the navigated object received the voice content is understood as whether the user noticed the current voice content and performed the corresponding action based on it.
In some cases, a user whose attention lapses while driving may fail to extract the useful information from the voice content played by the navigation terminal, making yaw likely.
Therefore, the voice content of the next voice broadcasting opportunity can be dynamically adjusted based on the behavior feedback information of the navigated object, and the key information the user ignored can be emphasized in the adjusted voice content, so that repeated reminders of the key information reduce the user's yaw rate.
For example: if, after the corresponding voice content is played at one voice broadcasting opportunity, the navigated object shows no corresponding driving behavior (such as decelerating or changing direction), the voice content of the next voice broadcasting opportunity can be adjusted; for instance, among the candidate voice contents predetermined for that opportunity, the candidate containing the key information is selected for repeated broadcast, and the other candidates are not broadcasted.
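The candidate-selection rule just described can be sketched as below; the function name, the string-containment test for "key information", and the fallback to the first candidate are illustrative assumptions.

```python
def select_next_content(candidates, key_info, user_responded):
    """Pick the voice content for the next broadcasting opportunity.

    candidates:     candidate voice strings predetermined for that opportunity.
    key_info:       the key phrase the previous broadcast carried.
    user_responded: whether behavior feedback matched the previous prompt.

    If the user showed no matching driving behavior, prefer a candidate
    that repeats the key information; otherwise use the default (first) one.
    """
    if not user_responded:
        for text in candidates:
            if key_info in text:
                return text
    return candidates[0]
```

For example, with candidates "in 500 m, keep going" and "in 500 m, turn right onto the ramp", an unresponsive user would be replayed the candidate containing "turn right".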
In this way, after the voice content corresponding to one voice broadcasting opportunity has been broadcasted, the behavior feedback information of the navigated object is detected in real time, and the voice content corresponding to the next voice broadcasting opportunity is dynamically adjusted accordingly. The embodiments of the present disclosure thereby avoid the problem that fixed voice broadcasting opportunities and fixed voice content make the user prone to yaw when the user's attention lapses while driving, reduce the user's yaw rate, and improve the navigation experience.
In an optional implementation manner of this embodiment, step S103, namely after broadcasting the voice content, dynamically adjusting the voice content at the next voice broadcasting opportunity based on the behavior feedback information of the navigated object, further includes the following steps:
after the voice content is broadcasted, detecting behavior feedback information of the navigated object;
judging whether the navigated object notices key information in the voice content or not based on the behavior feedback information;
and after determining that the navigated object does not notice the voice content, selecting the voice content comprising the key information at the next voice broadcasting opportunity to broadcast.
In this optional implementation, in a voice navigation scenario, after the navigation server plans the navigation route, it sends the planned route and the corresponding voice navigation content to the navigation terminal of the navigated object. The navigation terminal provides the voice navigation service based on the real-time positioning information of the navigated object, the navigation route, and the voice navigation content: it matches the real-time positioning information against the position points in the navigation route and, when a position point carrying voice content is matched, broadcasts the corresponding voice content.
The embodiments of the present disclosure consider that, after the voice content is broadcasted, the navigated object may not actually receive it, for example because of a lapse of attention. The behavior feedback information of the navigated object is therefore detected after the broadcast; it may include, but is not limited to, speed changes and driving-direction changes of the navigated object.
Once the behavior feedback information is determined, it can be judged whether the navigated object noticed the key information in the voice content. For example, suppose the voice content prompts the user to exit at the next highway junction and to change lanes in advance, but the detected behavior feedback information indicates that the navigated object made no obvious behavior change after the voice content was played (no deceleration, no direction change, and so on); it can then be determined that the navigated object did not notice the key information in the voice content. Conversely, if the feedback indicates that the navigated object made the corresponding behavior change (deceleration, direction change, and so on) after the voice content was played, it can be determined that the navigated object noticed the key information.
After it is determined that the navigated object did not notice the voice content, voice content that contains only the key information, or that emphasizes it, can be selected for broadcast at the next voice broadcasting opportunity, reminding the navigated object to attend to the key information. After it is determined that the navigated object did notice the voice content, the voice content for the next voice broadcasting opportunity can be selected in the conventional manner; it need not contain or emphasize the key information, and may even omit it, depending on the configured selection scheme, which is not specifically limited here.
In an optional implementation manner of this embodiment, the step of determining whether the navigated object notices key information in the voice content based on the behavior feedback information further includes the following steps:
after the voice content is broadcasted, whether the navigated object notices key information in the voice content is determined based on the matching relation between the behavior feedback information of the navigated object and the navigation action corresponding to the voice content.
In this optional implementation manner, as described above, the navigation server pushes the navigation route and the voice navigation content to the navigation terminal, where the voice navigation content includes the voice content and the voice broadcast time for playing the voice content.
After the voice content is played at the voice broadcasting opportunity, whether the voice content at the next voice broadcasting opportunity is adjusted or not is determined based on behavior feedback information of the navigated object.
In this embodiment, whether the navigated object noticed the key information in the voice content can be determined from the matching relationship between the behavior feedback information of the navigated object and the navigation action involved in the voice content. The behavior feedback information can be measured in at least two respects, such as the speed change and the driving-direction change of the navigated object. The navigation action is the action the navigated object should perform according to the navigation route; it usually concerns the driving direction, and executing it entails a matching travel speed. Speed changes, driving-direction changes, and the like can therefore be mapped to navigation actions in advance, and during navigation the changes in the behavior feedback information are matched against the navigation action to determine whether the navigated object noticed the key information in the voice content.
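The pre-built mapping between navigation actions and expected behavior feedback can be sketched as a lookup table of predicates. The action names, feedback fields, and the -10 km/h figure are illustrative assumptions, not the patent's values.

```python
# Each entry maps a navigation action to the feedback pattern expected
# when the user actually noticed the corresponding voice content.
EXPECTED_FEEDBACK = {
    "turn": lambda fb: fb.get("speed_delta_kmh", 0) < 0,                  # any slowdown
    "complex_intersection": lambda fb: fb.get("speed_delta_kmh", 0) <= -10,  # sharp slowdown
    "lane_change": lambda fb: fb.get("lane") == fb.get("target_lane"),    # reached target lane
}

def noticed_key_info(action, feedback):
    """True if the detected behavior feedback matches the navigation action."""
    check = EXPECTED_FEEDBACK.get(action)
    return bool(check and check(feedback))
```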
In an optional implementation manner of this embodiment, the step of determining whether the navigated object notices key information in the voice content based on a matching relationship between the behavior feedback information of the navigated object and the navigation action corresponding to the voice content further includes the following steps:
when the voice content relates to a navigation action for guiding the navigated object to turn or travel at a complex intersection, determining whether the navigated object notices key information in the voice content based on the travel speed of the navigated object.
In this optional implementation manner, the voice content corresponding to different voice broadcast occasions may be different, and the navigation action related to different voice content may also be different. The mapping relation between different navigation actions and behavior feedback information of the navigated object and the judgment conclusion of whether the navigated object notices the voice content related to the navigation action under the mapping relation can be set in advance.
In one embodiment, before a simple intersection and/or a complex intersection on the navigation route, the navigation server sets a corresponding voice broadcasting opportunity and voice content in the voice navigation content. The voice content may involve a navigation action guiding the navigated object to turn, or to travel through a complex intersection (e.g., a roundabout or multi-way junction). When the navigated object notices such voice content, its traveling speed usually changes, for example slowing down to make the turn. Whether the navigated object noticed the key information in the voice content can therefore be determined from the speed change in its behavior feedback information.
In an optional implementation manner of this embodiment, when the voice content relates to a navigation action that guides the navigated object to turn or travel at a complex intersection, the step of determining whether the navigated object notices key information in the voice content based on the travel speed of the navigated object further includes the following steps:
and when the voice content relates to guiding the navigated object to turn, and the traveling speed of the navigated object is not reduced, determining that the navigated object does not notice key information in the voice content.
In this alternative implementation, as described above, before a simple intersection and/or a complex intersection of the navigation route, the navigation server may relate to the corresponding voice broadcast opportunity in the voice navigation content based on the voice content, where the voice content may relate to other navigation actions for guiding the navigated object to turn or to travel at the complex intersection (e.g., roundabout, intersection, etc.), and the navigated object usually has a change in its traveling speed when noticing the voice content, such as slowing down to make a turn, etc. That is, at a turn or a complex intersection, the normal driving feedback information of the navigated object at least includes a deceleration, and after the voice content is broadcasted, when the navigated object does not perform the deceleration, it can be determined that the navigated object does not notice the key information in the voice content, and the key information needs to be emphasized again at the next voice broadcasting occasion.
Fig. 2 shows a voice navigation diagram for an application scenario in which a user drives near an intersection requiring a turn, according to an embodiment of the present disclosure. As shown in fig. 2, when the user approaches the intersection where a turn is required, the navigation terminal broadcasts the voice content indicating that a turn is needed ahead. If the driving behavior acquired by the navigation terminal shows that the user does not decelerate, or even accelerates, the navigation terminal can judge that the user's attention may have lapsed and that the voice content guiding the turn went unnoticed. When selecting the voice content for the next voice broadcast occasion, it therefore chooses the content that continues to emphasize the right turn and abandons the other candidate voice contents, thereby strengthening the reminder to the user.
In an optional implementation manner of this embodiment, when the voice content relates to a navigation action guiding the navigated object to turn or to travel through a complex intersection, the step of determining whether the navigated object has noticed the key information in the voice content based on its traveling speed further includes the following step:
when the voice content relates to a navigation action guiding the navigated object to travel through a complex intersection, if the deceleration amplitude of the navigated object is greater than or equal to a preset speed value, determining that the navigated object has not noticed the key information in the voice content.
In this optional implementation, when the navigated object approaches a complex intersection, such as a roundabout or a three-way junction, and is unfamiliar with the road shape, its most likely behavior if it missed the key content after the voice broadcast is a drastic deceleration or even a stop; at that point it can be inferred that the navigated object is hesitating because it is unsure which direction to take. Based on this, when the voice content relates to a navigation action guiding the navigated object through a complex intersection, the embodiment of the present disclosure may determine whether the navigated object noticed the key information by checking whether its deceleration amplitude is greater than or equal to a preset speed value. If the deceleration amplitude is greater than or equal to the preset value, the navigated object can be considered to have missed the key information in the voice content. The voice content corresponding to the next voice broadcast occasion can then be dynamically adjusted so that the key information is emphasized again, guiding the navigated object to travel through the complex intersection correctly along the navigation route.
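A minimal sketch of this heuristic, under the reading that a drastic deceleration near a complex intersection signals hesitation and a missed prompt (Python; the threshold value and the names are illustrative assumptions, not from the patent):

```python
# Assumed preset value: a drop of 5 m/s or more after the prompt counts as
# a drastic deceleration (i.e. the driver is hesitating at the junction).
PRESET_DECEL_MPS = 5.0

def missed_at_complex_intersection(speed_before: float, speed_after: float) -> bool:
    """True when the deceleration amplitude meets the preset value,
    which this embodiment reads as the key information being missed."""
    return (speed_before - speed_after) >= PRESET_DECEL_MPS
```

A real implementation would tune the preset value per road class and also treat a full stop (speed_after near zero) as the strongest hesitation signal.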
Fig. 3 shows a voice navigation diagram for an application scenario in which a user drives near a complex intersection, according to an embodiment of the present disclosure. As shown in fig. 3, when the user drives toward a complex intersection (e.g., a three-way junction), the navigation terminal broadcasts the voice content "three-way junction ahead, please take the middle lane". If the driving behavior acquired by the navigation terminal shows that the user slows down drastically or stops, the navigation terminal can judge that the user is hesitating and does not know which way to go forward, so when selecting the voice content for the next voice broadcast occasion it can add content that emphasizes going to the middle lane.
In an optional implementation manner of this embodiment, the step of determining whether the navigated object has noticed the key information in the voice content based on the matching relation between the behavior feedback information of the navigated object and the navigation action corresponding to the voice content further includes the following step:
when the voice content relates to a navigation action guiding the navigated object to travel in a target lane, judging whether the navigated object has switched to the target lane based on the positioning information of the navigated object, and determining whether the navigated object has noticed the key information in the voice content based on the judgment result.
In this alternative implementation, the voice content may relate to a navigation action guiding the navigated object to travel in a target lane. For example, when the navigated object is traveling on a highway and is about to leave at an exit ahead, the voice content prompts it to merge into the rightmost lane in advance so that it can exit smoothly near the highway exit. If the positioning information indicates that the navigated object is still traveling in the leftmost lane and has neither slowed down nor changed direction (i.e., has not merged), it can be determined that the navigated object did not notice the previous voice content. The voice content corresponding to the next voice broadcast occasion can then be dynamically adjusted so as to emphasize the key information guiding the navigated object to merge in advance.
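The lane check described here can be sketched as follows (Python; extracting a lane from raw positioning is simplified to precomputed lane indices, and all names are illustrative assumptions):

```python
def noticed_lane_prompt(start_lane: int, target_lane: int,
                        lanes_after_prompt: list[int]) -> bool:
    """True if positioning shows the navigated object moved into, or at
    least toward, the target lane after the merge prompt was broadcast."""
    if not lanes_after_prompt:
        return False  # no positioning feedback yet
    latest = lanes_after_prompt[-1]
    # Either already in the target lane, or closer to it than at the start.
    return latest == target_lane or abs(latest - target_lane) < abs(start_lane - target_lane)
```

Here lane 0 is the leftmost lane and higher indices lie to the right; deriving the index from GPS fixes and lane geometry is the hard part a real system would add.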
Fig. 4 shows a voice navigation diagram for an application scenario in which a user needs to switch lanes while driving, according to an embodiment of the present disclosure. As shown in fig. 4, while the user drives in the left lane, the navigation terminal broadcasts the voice content prompting a switch to the right lane at time t1, based on the voice broadcast occasion in the voice navigation content. When the navigation terminal determines from the positioning information at time t2 that the user is still in the left lane, and the user is about to reach the intersection, it can select voice content that continues to emphasize switching to the right lane for broadcast at the next voice broadcast occasion.
In an optional implementation manner of this embodiment, after the voice content is broadcast, the step of dynamically adjusting the voice content at the next voice broadcast occasion based on the behavior feedback information of the navigated object further includes the following step:
after the voice content is broadcast, adding a next voice broadcast occasion and the next voice content corresponding to it, wherein the added next voice content includes the key information of the voice content at least once.
In this optional implementation, after broadcasting a piece of voice content, the embodiment further obtains the behavior feedback information of the navigated object and judges from it whether the navigated object noticed the voice content; if not, the next voice broadcast occasion and its corresponding voice content may be dynamically adjusted as well.
As described above, after planning the navigation route, the navigation server pushes the navigation route and the voice navigation content to the navigation terminal. The voice navigation content includes voice broadcast occasions corresponding to a plurality of position points on the route and, for each occasion, at least one piece of voice content that can be broadcast.
On the basis of the voice navigation content, the next voice broadcast occasion set by the navigation server and the corresponding voice content can be dynamically adjusted based on the navigated object's behavior feedback in response to the played voice content.
In some embodiments, the navigation terminal may adjust the voice content corresponding to the next voice broadcast occasion set by the navigation server based on the behavior feedback information of the navigated object; for example, from the plurality of candidate contents corresponding to that occasion, the candidate related to the key information may be selected, or a piece of voice content emphasizing the key information may be added. In this embodiment, the next voice broadcast occasion itself may be preset by the navigation server.
In other embodiments, the navigation terminal may also adjust the next voice broadcast occasion itself, as well as the corresponding voice content, based on the behavior feedback information of the navigated object. That is, on top of the voice broadcast occasions set by the navigation server, the navigation terminal may add a new voice broadcast occasion and the voice content corresponding to it. The new occasion may be set based on the urgency of the navigation action the navigated object missed: for example, when the navigated object needs to perform the navigation action immediately, the occasion may be set to a point close to the current time, up to immediate playback, and in any case before the preset voice broadcast occasion closest to the current time. The dynamically added voice content may be obtained by extracting the key information from the missed voice content and building the new content around it, for example by repeating the key information at least once.
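One way to sketch this occasion-insertion step (Python; the schedule structure, the midpoint placement, and the wording of the repeated prompt are all illustrative assumptions, not from the patent):

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class BroadcastOccasion:
    distance_m: float              # route distance at which to broadcast
    text: str = field(compare=False)

def add_emphasis_occasion(schedule: list[BroadcastOccasion],
                          current_m: float,
                          key_info: str) -> list[BroadcastOccasion]:
    """Insert a new occasion between the current position and the nearest
    preset occasion, repeating the missed key information at least once."""
    upcoming = [o for o in schedule if o.distance_m > current_m]
    nearest = min(o.distance_m for o in upcoming) if upcoming else current_m + 100.0
    new = BroadcastOccasion(distance_m=(current_m + nearest) / 2.0,
                            text=f"{key_info}. Again: {key_info}")
    return sorted(schedule + [new])
```

A distance-based schedule stands in for GPS position points here; an urgency parameter could further pull `distance_m` toward `current_m` for immediate playback.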
Fig. 5 is a schematic diagram illustrating an application scenario of voice navigation according to an embodiment of the present disclosure. As shown in fig. 5, a user sends a navigation request, from the current location to a destination, to the navigation server through a mobile phone terminal. The navigation server plans a navigation route based on the request and pushes the planned route and the voice navigation content to the mobile phone terminal. After receiving them, the terminal starts navigation upon the user's selection and acquires the vehicle's current position information in real time; whenever the current position matches a voice broadcast occasion in the voice navigation content, it selects at least one piece of the corresponding voice content for broadcast. After broadcasting, it monitors the vehicle's speed changes, driving-direction changes, and the like in real time, judges whether the driver noticed the key information by checking whether these changes match the key information in the voice content, and, if not, dynamically adjusts the voice broadcast occasion and the corresponding voice content so as to emphasize the ignored key information again.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 6 shows a block diagram of a voice navigation apparatus according to an embodiment of the present disclosure. The apparatus may be implemented as part or all of an electronic device through software, hardware, or a combination of the two. As shown in fig. 6, the voice navigation apparatus includes:
an obtaining module 601, configured to obtain a navigation route and voice navigation content, where the voice navigation content includes voice content and voice broadcast occasions;
a broadcasting module 602, configured to broadcast the voice content based on the voice broadcast occasions while the navigated object travels along the navigation route;
an adjusting module 603, configured to dynamically adjust the voice content of the next voice broadcast occasion based on the behavior feedback information of the navigated object after the voice content is broadcast.
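The three modules above can be sketched as plain Python classes; the method names and placeholder bodies are illustrative assumptions, since the patent specifies behavior rather than an implementation:

```python
class ObtainingModule:
    def obtain(self):
        """Fetch the navigation route and voice navigation content
        (voice content plus broadcast occasions) from the server."""
        ...

class BroadcastingModule:
    def broadcast(self, occasion, content):
        """Play the voice content when the navigated object's position
        matches the given broadcast occasion."""
        ...

class AdjustingModule:
    def adjust(self, feedback):
        """After a broadcast, adjust the next occasion's voice content
        based on the navigated object's behavior feedback."""
        ...
```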
In this embodiment, the voice navigation apparatus runs on the navigation terminal. Voice navigation can be understood as voice-driven navigation; it employs representative intelligent speech technologies such as speech recognition and speech encoding/decoding, and is commonly applied in vehicles, user terminals, and the like. The voice content reminds the navigated object, through voice interaction, of information such as the driving direction and the lane during navigation; that is, it mainly guides the navigation action the user needs to perform imminently. A voice broadcast occasion may be a fixed GPS position point: once the navigation route is planned, the voice content and the broadcast occasions are essentially fixed, and the navigation terminal matches the navigated object's current positioning information against the broadcast occasions, playing the corresponding voice content whenever a match occurs. It can be understood that each broadcast occasion may correspond to one or more candidate contents, from which the navigation terminal may select one or more pieces to broadcast, either randomly or according to a preset rule.
The navigated object may include a person, and the navigation terminal may include, but is not limited to, a mobile phone, an iPad, a computer, a smart watch, a vehicle, a robot, and the like. The navigation server plans a navigation route based on the start point and the end point and then sends the route and the navigation content to the navigated object. In voice navigation, the navigation content may include, but is not limited to, voice content and voice broadcast occasions, which are set in correspondence with each other.
While the navigated object travels along the navigation route, the navigation terminal detects the navigated object's current positioning information in real time, matches it against the voice broadcast occasions in the voice navigation content, and broadcasts the corresponding voice content once a match succeeds.
In the embodiment of the present disclosure, each time after a piece of voice content is broadcast, the navigation terminal further detects the behavior feedback information of the navigated object. Behavior feedback information can be understood as information fed back by the navigated object's driving behavior, including but not limited to speed changes, driving-direction changes, and positioning information. It can be collected by the vehicle the navigated object drives or by a handheld terminal it carries. From this information it can be judged whether the navigated object performed the behavior corresponding to the voice content just played: if so, the navigated object can be considered to have received the voice content; otherwise, not. In the embodiment of the present disclosure, "receiving" the voice content means that the user noticed it and performed the corresponding action based on it.
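One way to picture this behavior feedback is the following sketch (Python; the field names, the action labels, and the 30-degree heading threshold are illustrative assumptions), which packages the feedback signals and coarsely matches them against the navigation action a prompt expects:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BehaviorFeedback:
    speed_delta_mps: float      # negative means the object slowed down
    heading_delta_deg: float    # change of travel direction since the prompt
    lane: Optional[int] = None  # lane index from positioning, if available

def matches_action(fb: BehaviorFeedback, action: str) -> bool:
    """Coarse mapping from the prompt's expected action to driving feedback:
    a turn implies both slowing and a heading change; slowing alone suffices
    for a deceleration prompt."""
    if action == "turn":
        return fb.speed_delta_mps < 0 and abs(fb.heading_delta_deg) > 30
    if action == "slow_down":
        return fb.speed_delta_mps < 0
    return False
```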
In some embodiments, the user may not be sufficiently focused while driving and may fail to extract the valid information from the voice content played by the navigation terminal, which easily causes the user to yaw (deviate from the route).
Therefore, the voice content of the next voice broadcast occasion can be dynamically adjusted based on the behavior feedback information of the navigated object, and the key information the user ignored can be emphasized in the adjusted content, so that the repeated reminder reduces the user's yaw rate.
The following example illustrates this: suppose the corresponding voice content is played at the first voice broadcast occasion, but the navigated object shows no corresponding driving behavior based on it, such as decelerating or changing direction. The voice content of the next broadcast occasion in the navigation can then be adjusted; for example, from the plurality of candidate contents preset for that occasion in the voice navigation content, the candidate containing the key information is selected for repeated broadcast, and the other candidates are no longer broadcast.
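The candidate-selection step in this example can be sketched as follows (Python; the function and its fallback rule are illustrative assumptions, not from the patent):

```python
def pick_candidate(candidates: list[str], key_info: str) -> str:
    """From the candidates preset for the next broadcast occasion, prefer
    the one that repeats the missed key information; fall back to the
    first candidate when none of them contains it."""
    for text in candidates:
        if key_info in text:
            return text
    return candidates[0]
```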
In this voice navigation scheme, during navigation based on the route and voice navigation content sent by the navigation server, after the voice content corresponding to one broadcast occasion is broadcast, the behavior feedback information of the navigated object is detected in real time and the voice content corresponding to the next broadcast occasion is dynamically adjusted accordingly. The embodiments of the present disclosure thereby avoid the problem that, with fixed broadcast occasions and fixed voice content, a user whose attention lapses while driving easily yaws; they can reduce the user's yaw rate and improve the navigation experience.
In an optional implementation manner of this embodiment, the adjusting module includes:
the first determining submodule is configured to detect behavior feedback information of the navigated object after the voice content is broadcasted;
a first judging sub-module configured to judge whether the navigated object notices key information in the voice content based on the behavior feedback information;
and the selection sub-module is configured to, after it is determined that the navigated object has not noticed the voice content, select voice content including the key information for broadcast at the next voice broadcast occasion.
In this optional implementation, in a voice navigation scenario, after the navigation server plans the navigation route, it sends the planned route and the corresponding voice navigation content to the navigation terminal of the navigated object. The navigation terminal then provides the voice navigation service based on the navigated object's real-time positioning information, the navigation route, and the voice navigation content: it matches the real-time positioning information against the position points on the route and, whenever a position point with associated voice content is matched, broadcasts that voice content.
The embodiments of the present disclosure consider that, after the voice content is broadcast, the navigated object may not actually receive it, owing to a lapse of attention or the like. The behavior feedback information of the navigated object can therefore be detected after broadcasting; it may include, but is not limited to, the navigated object's speed changes and driving-direction changes.
After the behavior feedback information of the navigated object is determined, whether the navigated object noticed the key information in the voice content can be judged from it. For example, suppose the voice content prompts the user to exit at the next highway junction and to change lanes in advance, but the detected behavior feedback shows no obvious behavior change after the content is played, such as no deceleration and no direction change; it can then be determined that the navigated object did not notice the key information in the voice content. Conversely, if the detected feedback shows that the navigated object made the corresponding behavior change, such as decelerating or changing direction, it can be determined that the navigated object did notice the key information.
After it is determined that the navigated object did not notice the voice content, voice content that contains only the key information, or that emphasizes it, can be selected for broadcast at the next voice broadcast occasion, so as to remind the navigated object of the key information. After it is determined that the navigated object did notice the voice content, the content for the next occasion can be selected in the conventional manner; it need not contain or emphasize the key information, and may even omit it, depending on the configured selection scheme, which is not specifically limited here.
In an optional implementation manner of this embodiment, the first determining sub-module includes:
and the second determining sub-module is configured to determine whether the navigated object notices key information in the voice content based on the matching relation between the behavior feedback information of the navigated object and the navigation action corresponding to the voice content after the voice content is broadcasted.
In this optional implementation manner, as described above, the navigation server pushes the navigation route and the voice navigation content to the navigation terminal, where the voice navigation content includes the voice content and the voice broadcast time for playing the voice content.
After the voice content is played at the voice broadcasting opportunity, whether the voice content at the next voice broadcasting opportunity is adjusted or not is determined based on the behavior feedback information of the navigated object.
In this embodiment, whether the navigated object has noticed the key information in the voice content can be determined based on the matching relation between the navigated object's behavior feedback information and the navigation action involved in the voice content. The behavior feedback information can be measured along at least two dimensions, such as speed change and driving-direction change. A navigation action is the action the navigated object should perform according to the navigation route; it usually concerns the driving direction, and executing it implies a matching travel speed. Speed changes, direction changes, and the like can therefore be mapped to navigation actions in advance, and during navigation, whether the navigated object has noticed the key information is determined by matching the changes in the behavior feedback information against the expected navigation action.
In an optional implementation manner of this embodiment, the second determining sub-module includes:
a third determination submodule configured to determine whether the navigated object notices key information in the voice content based on a travel speed of the navigated object when the voice content relates to a navigation action that guides the navigated object to turn or travel at a complex intersection.
In this optional implementation, the voice content corresponding to different voice broadcast occasions may differ, and so may the navigation actions the content involves. The mapping between different navigation actions and the navigated object's behavior feedback information, together with the conclusion to draw under each mapping about whether the navigated object noticed the related voice content, can be set in advance.
In one embodiment, before a simple intersection and/or a complex intersection on a navigation route, the navigation server sets a corresponding voice broadcast occasion and voice content in the voice navigation content. The voice content may relate to navigation actions guiding the navigated object to turn, or to travel through a complex intersection (e.g., a roundabout or a multi-way junction). When the navigated object notices such voice content, its traveling speed usually changes; for example, it slows down to make the turn. Whether the navigated object has noticed the key information in the voice content can therefore be determined from the speed change in its behavior feedback information.
In an optional implementation manner of this embodiment, the third determining sub-module includes:
a fourth determining sub-module configured to determine that the navigated object has not noticed the key information in the voice content when the voice content relates to guiding the navigated object to turn and the traveling speed of the navigated object is not reduced.
In this alternative implementation, as described above, before a simple intersection and/or a complex intersection on the navigation route, the navigation server may set a corresponding voice broadcast occasion and voice content in the voice navigation content. The voice content may relate to navigation actions guiding the navigated object to turn, or to travel through a complex intersection (e.g., a roundabout or a multi-way junction), and the navigated object's traveling speed usually changes when it notices the voice content, for example slowing down to make the turn. That is, at a turn or a complex intersection, the normal driving feedback of the navigated object includes at least a deceleration. When, after the voice content is broadcast, the navigated object does not decelerate, it can be determined that the navigated object did not notice the key information in the voice content, and the key information needs to be emphasized again at the next voice broadcast occasion.
In an optional implementation manner of this embodiment, the third determining sub-module includes:
a fifth determining sub-module, configured to determine that the navigated object has not noticed the key information in the voice content if the deceleration amplitude of the navigated object is greater than or equal to a preset speed value when the voice content relates to a navigation action guiding the navigated object to travel through a complex intersection.
In this optional implementation, when the navigated object approaches a complex intersection, such as a roundabout or a three-way junction, and is unfamiliar with the road shape, its most likely behavior if it missed the key content after the voice broadcast is a drastic deceleration or even a stop; at that point it can be inferred that the navigated object is hesitating because it is unsure which direction to take. Based on this, when the voice content relates to a navigation action guiding the navigated object through a complex intersection, the embodiment of the present disclosure may determine whether the navigated object noticed the key information by checking whether its deceleration amplitude is greater than or equal to a preset speed value. If the deceleration amplitude is greater than or equal to the preset value, the navigated object can be considered to have missed the key information in the voice content. The voice content corresponding to the next voice broadcast occasion can then be dynamically adjusted so that the key information is emphasized again, guiding the navigated object to travel through the complex intersection correctly along the navigation route.
In an optional implementation manner of this embodiment, the second determining sub-module includes:
a second determination sub-module configured to, when the voice content relates to a navigation action that guides the navigated object to travel in a target lane, determine whether the navigated object switches to the target lane based on the positioning information of the navigated object, and determine whether the navigated object notices key information in the voice content based on the determination result.
In this alternative implementation, the voice content may relate to a navigation action guiding the navigated object to travel in a target lane. For example, when the navigated object is traveling on a highway and is about to leave at an exit ahead, the voice content prompts it to merge into the rightmost lane in advance so that it can exit smoothly near the highway exit. If the positioning information indicates that the navigated object is still traveling in the leftmost lane and has neither slowed down nor changed direction (i.e., has not merged), it can be determined that the navigated object did not notice the previous voice content. The voice content corresponding to the next voice broadcast occasion can then be dynamically adjusted so as to emphasize the key information guiding the navigated object to merge in advance.
In an optional implementation manner of this embodiment, the adjusting module includes:
and the adding sub-module is configured to, after the voice content is broadcast, add a next voice broadcast occasion and the next voice content corresponding to it, the added next voice content including the key information of the voice content at least once.
In this optional implementation, after broadcasting a piece of voice content, the embodiment further obtains the behavior feedback information of the navigated object and judges from it whether the navigated object noticed the voice content; if not, the next voice broadcast occasion and its corresponding voice content may be dynamically adjusted as well.
As described above, after planning the navigation route, the navigation server pushes the navigation route and the voice navigation content to the navigation terminal. The voice navigation content includes voice broadcast occasions corresponding to a plurality of position points on the route and, for each occasion, at least one piece of voice content that can be broadcast.
On the basis of the voice navigation content, the next voice broadcast occasion set by the navigation server and the corresponding voice content can be dynamically adjusted based on the navigated object's behavior feedback in response to the played voice content.
In some embodiments, the navigation terminal may adjust the voice content corresponding to the next voice broadcast occasion set by the navigation server based on the behavior feedback information of the navigated object; for example, from the plurality of candidate contents corresponding to that occasion, the candidate related to the key information may be selected, or a piece of voice content emphasizing the key information may be added. In this embodiment, the next voice broadcast occasion itself may be preset by the navigation server.
In other embodiments, the navigation terminal may adjust both the next voice broadcast occasion and the corresponding voice content based on the behavior feedback information of the navigated object. That is, on top of the voice broadcast occasions set by the navigation server, the terminal may add a new voice broadcast occasion together with the voice content for that occasion. The added occasion may be set according to the urgency of the navigation action related to the voice content missed by the navigated object: when the navigation action must be performed immediately, the occasion may be set to a point in time close to the current time, such as immediate playback. The added occasion may also be set before the voice broadcast occasion in the voice navigation content that is closest to the current time. The dynamically added next voice content may be obtained by extracting the key information from the missed voice content and generating new content from it; for example, the key information may be repeated at least once in the newly added content.
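The scheduling rule just described can be sketched as follows. Timestamps are seconds on a monotonic clock, and the urgency flag, the 3-second lead time, and the reminder phrasing are assumptions made for illustration only:

```python
def schedule_reminder(now_s: float, next_occasion_s: float,
                      urgent: bool, key_info: str,
                      lead_s: float = 3.0) -> tuple[float, str]:
    """Choose when to replay a missed prompt and what to say.

    Urgent actions are re-broadcast immediately; otherwise the reminder is
    placed just before the next planned broadcast occasion (never in the past).
    """
    when = now_s if urgent else max(now_s, next_occasion_s - lead_s)
    # Repeat the missed key information at least once in the new content.
    text = f"{key_info}. Repeat: {key_info}."
    return when, text
```

So a missed "turn left ahead" prompt tied to an imminent maneuver is replayed at once, while a less urgent one is slotted shortly before the next planned occasion.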
Fig. 7 is a schematic structural diagram of an electronic device suitable for implementing a voice navigation method according to an embodiment of the present disclosure.
As shown in fig. 7, the electronic device 700 includes a processing unit 701, which may be implemented as a CPU, GPU, FPGA, NPU, or other processing unit. The processing unit 701 may execute the various processes of any one of the method embodiments of the present disclosure described above according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage section 708 into a random access memory (RAM) 703. The RAM 703 also stores the various programs and data necessary for the operation of the electronic device 700. The processing unit 701, the ROM 702, and the RAM 703 are connected to one another by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that the computer program read from it can be installed into the storage section 708 as necessary.
In particular, according to embodiments of the present disclosure, any of the methods described above with reference to embodiments of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing any of the methods of the embodiments of the present disclosure. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present disclosure may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
As another aspect, the present disclosure also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus in the above-described embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described in the present disclosure.
The foregoing description presents only preferred embodiments of the present disclosure and illustrates the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the present disclosure is not limited to technical solutions formed by the specific combination of the above features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with features disclosed in the present disclosure (but not limited thereto) that have similar functions.

Claims (10)

1. A voice navigation method, comprising:
acquiring a navigation route and voice navigation content; the voice navigation content comprises voice content and voice broadcasting time;
broadcasting the voice content based on the voice broadcasting opportunity in the process that the navigated object runs based on the navigation route;
after the voice content is broadcasted, dynamically adjusting the voice content of the next voice broadcasting opportunity based on the behavior feedback information of the navigated object.
2. The method of claim 1, wherein dynamically adjusting the voice content of the next voice broadcasting occasion based on the behavior feedback information of the navigated object after broadcasting the voice content comprises:
after the voice content is broadcasted, detecting behavior feedback information of the navigated object;
judging whether the navigated object notices key information in the voice content or not based on the behavior feedback information;
and after determining that the navigated object did not notice the voice content, selecting, for broadcast at the next voice broadcasting occasion, voice content that includes the key information.
3. The method of claim 2, wherein determining whether the navigated object is aware of key information in the voice content based on the behavioral feedback information comprises:
after the voice content is broadcasted, whether the navigated object notices key information in the voice content is determined based on the matching relation between the behavior feedback information of the navigated object and the navigation action corresponding to the voice content.
4. The method of claim 3, wherein determining whether the navigated object notices key information in the voice content based on a matching relationship between behavior feedback information of the navigated object and navigation actions corresponding to the voice content comprises:
when the voice content relates to a navigation action for guiding the navigated object to turn or travel at a complex intersection, determining whether the navigated object notices key information in the voice content based on the travel speed of the navigated object.
5. The method of claim 4, wherein determining whether the navigated object notices key information in the voice content based on the travel speed of the navigated object, when the voice content relates to a navigation action for guiding the navigated object to turn or to travel at a complex intersection, comprises:
and when the voice content relates to guiding the navigated object to turn and the travel speed of the navigated object does not decrease, determining that the navigated object has not noticed the key information in the voice content.
6. The method of claim 4, wherein determining whether the navigated object notices key information in the voice content based on the travel speed of the navigated object, when the voice content relates to a navigation action for guiding the navigated object to turn or to travel at a complex intersection, comprises:
when the voice content relates to a navigation action for guiding the navigated object to travel at a complex intersection, if the deceleration amplitude of the navigated object is greater than or equal to a preset speed value, determining that the navigated object has noticed the voice navigation content.
7. The method according to any one of claims 3-6, wherein determining whether the navigated object notices key information in the voice content based on a matching relationship between behavior feedback information of the navigated object and navigation actions corresponding to the voice content comprises:
when the voice content relates to a navigation action for guiding the navigated object to travel in a target lane, judging whether the navigated object is switched to the target lane based on the positioning information of the navigated object, and determining whether the navigated object notices key information in the voice content based on the judgment result.
8. The method according to any one of claims 1 to 6, wherein after broadcasting the voice content, dynamically adjusting the voice content of the next voice broadcasting occasion based on the behavior feedback information of the navigated object comprises:
and after the voice content is broadcasted, adding a next voice broadcasting opportunity and a next voice content corresponding to the next voice broadcasting opportunity, wherein the added next voice content at least comprises key information in the voice content once.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory, wherein the processor executes the computer program to implement the method of any of claims 1-8.
10. A computer program product comprising computer instructions, wherein the computer instructions, when executed by a processor, implement the method of any one of claims 1-8.
CN202210249291.2A 2022-03-14 2022-03-14 Voice navigation method, electronic equipment and computer program product Pending CN114812591A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210249291.2A CN114812591A (en) 2022-03-14 2022-03-14 Voice navigation method, electronic equipment and computer program product


Publications (1)

Publication Number Publication Date
CN114812591A true CN114812591A (en) 2022-07-29

Family

ID=82528336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210249291.2A Pending CN114812591A (en) 2022-03-14 2022-03-14 Voice navigation method, electronic equipment and computer program product

Country Status (1)

Country Link
CN (1) CN114812591A (en)

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5410486A (en) * 1992-07-20 1995-04-25 Toyota Jidosha K.K. Navigation system for guiding vehicle by voice
JP2007271378A (en) * 2006-03-30 2007-10-18 Denso Corp Onboard navigation system
KR20090019203A (en) * 2007-08-20 2009-02-25 에스케이 텔레콤주식회사 Method and navigation terminal for controlling volume of voice guidance message, and navigation service system therefor
KR20090126145A (en) * 2008-06-03 2009-12-08 엔에이치엔(주) Navigation terminal for outputting variable guidance audio information based on vehicles speed and method thereof
KR20100013840A (en) * 2008-08-01 2010-02-10 (주)디코인 Navigation device for vehicles with vocal and graphical guide message providing function and method there of
JP2011232097A (en) * 2010-04-26 2011-11-17 Alpine Electronics Inc Navigation device, voice guidance method and voice guidance program
CN103575288A (en) * 2012-07-25 2014-02-12 昆达电脑科技(昆山)有限公司 Navigation method and device thereof for video broadcast situations
CN103776460A (en) * 2014-01-27 2014-05-07 上海安吉星信息服务有限公司 Voice broadcasting method of navigation system
CN106767881A (en) * 2016-12-01 2017-05-31 深圳市金立通信设备有限公司 A kind of navigation control method and terminal
CN107179089A (en) * 2017-05-22 2017-09-19 成都宏软科技实业有限公司 The prevention method and system for preventing navigation crossing from missing during a kind of interactive voice
CN107202591A (en) * 2017-06-05 2017-09-26 安徽同帆新能源机车科技有限公司 Interactive voice automated navigation system based on electric bicycle
CN107289964A (en) * 2016-03-31 2017-10-24 高德信息技术有限公司 One kind navigation voice broadcast method and device
CN108240822A (en) * 2016-12-27 2018-07-03 沈阳美行科技有限公司 Method and device for prompting violation electronic eye
CN109540161A (en) * 2018-11-08 2019-03-29 东软睿驰汽车技术(沈阳)有限公司 A kind of air navigation aid and device of vehicle
CN111220172A (en) * 2018-11-23 2020-06-02 北京嘀嘀无限科技发展有限公司 Navigation voice broadcasting method and system
CN111238512A (en) * 2018-11-29 2020-06-05 上海博泰悦臻网络技术服务有限公司 Navigation lane reminding method and system and electronic equipment
US20200312139A1 (en) * 2019-03-28 2020-10-01 Honda Motor Co., Ltd. Driving assistance system for vehicle
CN112601933A (en) * 2018-09-27 2021-04-02 宝马股份公司 Providing vehicle occupants with interactive feedback for voice broadcasts
CN112798003A (en) * 2020-12-30 2021-05-14 腾讯科技(深圳)有限公司 Navigation prompt information generation method, prompting method, device and equipment
CN113532467A (en) * 2020-04-14 2021-10-22 阿里巴巴集团控股有限公司 Voice broadcasting method, device and equipment
CN113739796A (en) * 2020-05-28 2021-12-03 阿里巴巴集团控股有限公司 Information prompting method and device, navigation server, navigation terminal and storage medium


Similar Documents

Publication Publication Date Title
EP1975568A2 (en) Crossroad guide method in a navigation system
US20220194433A1 (en) Driving control device and vehicle behavior suggestion device
KR20060040010A (en) Method for controlling output of audio signal for route guidance in navigation system
US10900802B2 (en) Map based navigation method, apparatus, storage medium and equipment
CN115497331B (en) Parking method, device and equipment and vehicle
CN113340318A (en) Vehicle navigation method, device, electronic equipment and storage medium
CN110567476A (en) Navigation method and device
CN111337045A (en) Vehicle navigation method and device
CN114812591A (en) Voice navigation method, electronic equipment and computer program product
US9714839B2 (en) Apparatus and method for use with a navigation system
CN111024111A (en) Navigation method and electronic equipment
CN111341134A (en) Lane line guide prompting method, cloud server and vehicle
CN114413926B (en) Map display method based on mapbox engine osm data and high-precision data
CN115662172A (en) Traffic signal lamp running state determining method and device and electronic equipment
US8423282B2 (en) Road guidance service method and navigation system for implementing the method
CN111243313B (en) Roundabout navigation method and device, terminal equipment and storage medium
CN113739800A (en) Navigation guiding method and computer program product
CN107154964B (en) Mobile terminal information pushing method and device
CN115164926A (en) Navigation information broadcasting method, electronic equipment and storage medium
JP2010020564A (en) Information notification device, control method therefor and control program thereof
CN115824239A (en) Recommended lane determining method and device, electronic equipment and computer program product
CN114863673B (en) Voice broadcasting method, device and system and position-based service providing method
CN117075350B (en) Driving interaction information display method and device, storage medium and electronic equipment
KR100448387B1 (en) Method and system for providing road information for a vehicle
CN113781765B (en) Information processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination