WO2021232726A1 - Navigation audio playback method, apparatus, device, and computer storage medium
- Publication number: WO2021232726A1 (application PCT/CN2020/131319)
- Authority: WIPO (PCT)
- Prior art keywords: audio, navigation, navigation audio, user, estimated arrival
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3697—Output of additional, non-guidance related information, e.g. low fuel level
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3629—Guidance using speech or audio output, e.g. text-to-speech
Description
- This application relates to the field of computer application technology, in particular to the field of big data technology.
- In view of this, the present application provides a navigation audio playback method, apparatus, device, and computer storage medium to solve the above technical problems.
- In one aspect, this application provides a navigation audio playback method, which includes:
- determining the navigation audio to be broadcast and the broadcast location points in a navigation route; broadcasting the corresponding navigation audio at the broadcast location points, and selecting the non-navigation audio to be played according to the gap time between the broadcast location points.
- this application provides a navigation audio playback device, which includes:
- the navigation determination unit is used to determine the navigation audio to be broadcast and the broadcast location points in a navigation route;
- the broadcast processing unit is configured to broadcast the corresponding navigation audio at the broadcast location points, and to select the non-navigation audio to be played according to the gap time between the broadcast location points.
- this application provides an electronic device, including:
- at least one processor; and
- a memory communicatively connected with the at least one processor; wherein,
- the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the method described in any one of the foregoing.
- the present application provides a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to make the computer execute any of the methods described above.
- Figure 1 shows an exemplary system architecture to which embodiments of the present invention can be applied
- Figure 2 is a flow chart of the main method provided by an embodiment of the application.
- FIG. 3 is a flowchart of a method for determining non-navigation audio provided by an embodiment of the application
- FIG. 4 is a structural diagram of a navigation audio playback device provided by an embodiment of the application.
- FIG. 5 is an example diagram of broadcasting in a navigation route provided by an embodiment of the application.
- Fig. 6 is a block diagram of an electronic device used to implement an embodiment of the present application.
- Figure 1 shows an exemplary system architecture to which embodiments of the present invention can be applied.
- the system architecture may include terminal devices 101 and 102, a network 103, and a server 104.
- the network 103 is used to provide a medium for communication links between the terminal devices 101 and 102 and the server 104.
- the network 103 may include various connection types, such as wired, wireless communication links, or fiber optic cables, and so on.
- the user can use the terminal devices 101 and 102 to interact with the server 104 through the network 103.
- Various applications may be installed on the terminal devices 101 and 102, such as map applications, voice interactive applications, web browser applications, communication applications, and so on.
- the terminal devices 101 and 102 may be various electronic devices, including but not limited to smart phones, tablet computers, smart speakers, smart wearable devices, and so on.
- the navigation audio playback device provided by the present invention can be set up and run in the server 104 mentioned above, and can also be set up and run in the terminal device 101 or 102. It can be implemented as multiple software or software modules (for example, to provide distributed services), or as a single software or software module, which is not specifically limited here.
- For example, if the navigation audio playback device is set up and running on the server 104, the navigation audio playback device uses the method provided by the embodiment of the present invention to determine the navigation audio and non-navigation audio to be broadcast in the navigation route, and provides the navigation audio and non-navigation audio to the terminal device 101 or 102 for playback.
- the server 104 may be a single server or a server group composed of multiple servers. It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. There can be any number of terminal devices, networks, and servers according to implementation needs.
- Figure 2 is a flowchart of the main method provided by an embodiment of the application.
- In this application, the playback of navigation audio and the playback of non-navigation audio are no longer controlled by different applications; instead, both are uniformly controlled and played by the navigation audio playback device, for example by a map application with a navigation function.
- the method may include the following steps:
- the navigation audio and the broadcast location point to be broadcast in the navigation route are determined.
- the route planning is performed based on the starting point location, the ending point location, and the travel mode input by the user, and the route planning result is returned to the user.
- the user can select a route from which to navigate.
- The method of this application is executed after the user selects a route for navigation; the route selected by the user is the navigation route. For a navigation route, there will be multiple broadcast location points at which navigation audio is played.
- A broadcast location point refers to the specific geographic location at which navigation audio is broadcast, for example, broadcasting turn guidance at a certain location before an intersection, or broadcasting the speed limit of a road section at the entrance of that road section, and so on.
- The navigation audio broadcast along a navigation route is often dense, but not all users need all of this navigation audio.
- For a navigation route, if the user is familiar with it, often only some key navigation audio is needed; if the user is not familiar with it, more navigation content is needed. Therefore, as a preferred embodiment, according to the user's familiarity with the navigation route and the importance of each navigation audio, navigation audio whose importance matches that familiarity can be selected from the navigation audio of the route as the navigation audio to be broadcast, so as to ensure that the navigation broadcasts during navigation match the user's navigation needs.
- the user's familiarity with the navigation route can be determined based on the number of times the user has navigated the route in history. If the number of times the user has navigated the route in history exceeds the preset number threshold, it can be considered that the user is familiar with the navigation route.
- For example, if user A is familiar with the navigation route, the navigation broadcast content mainly focuses on turning points, each intersection is announced only once, and electronic eye (speed camera) information is not broadcast.
- If user B is not familiar with the navigation route, the navigation broadcast content needs to be accompanied by detailed instructions to assist judgment, and electronic eye information is broadcast.
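- To make this selection rule concrete, here is a minimal sketch (in Python, purely illustrative): the field names, importance labels, and the count threshold are assumptions rather than values defined by this application.

```python
# Illustrative sketch only: field names, labels, and threshold are assumed.
FAMILIARITY_COUNT_THRESHOLD = 5  # user counts as "familiar" above this many past navigations

def is_familiar(history_count: int) -> bool:
    """A user is considered familiar with the route once the historical
    navigation count exceeds a preset threshold."""
    return history_count > FAMILIARITY_COUNT_THRESHOLD

def select_navigation_audios(route_audios, history_count):
    """Keep only the navigation audios whose importance matches the user's familiarity.

    route_audios: list of dicts like
        {"text": "...", "importance": "key" or "detail", "is_electronic_eye": bool}
    """
    if not is_familiar(history_count):
        # Unfamiliar users get all guidance, including electronic eye information.
        return list(route_audios)
    # Familiar users only get key guidance (e.g. turning points), no electronic eyes.
    return [a for a in route_audios
            if a["importance"] == "key" and not a["is_electronic_eye"]]
```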
- The corresponding navigation audio is broadcast at the broadcast location points, and the non-navigation audio to be played is selected according to the gap time between the broadcast location points.
- That is, the navigation audio playback device plays non-navigation audio during the gap time between navigation audios.
- One implementation is to determine in advance the entire sequence of non-navigation audio to be played over the whole navigation route, and then play it according to the determined sequence.
- This amounts to a pre-determined audio sequence. However, during the user's journey, changes in road conditions, changes in the user's own speed, stops, and so on will change the time needed to reach a navigation audio broadcast location point, that is, the above-mentioned gap time changes, so the predetermined sequence is no longer suitable and needs to be adjusted. Therefore, another preferred embodiment can be adopted: every time a piece of navigation audio or non-navigation audio finishes playing, the next non-navigation audio to be played is determined in real time.
- First, the location of the user at the time the current navigation audio or non-navigation audio finishes playing is determined.
- In other words, based on the i-th audio, the (i+1)-th audio to be played can be determined, where each audio can be navigation audio or non-navigation audio.
- If the i-th audio is currently being played, the location of the user when the i-th audio finishes playing can be estimated based on the remaining duration of the i-th audio and the current speed of the user, where i is a positive integer.
- If the i-th audio has already finished playing, the position of the user when the i-th audio finished playing is simply the current position of the user.
- In this way, each audio to be played is determined one by one. For example, if the first item is navigation audio, the first navigation audio is taken as the current audio to determine whether the next item to be played is navigation audio or non-navigation audio, and, if it is non-navigation audio, which non-navigation audio it is. After the next audio is determined, it in turn becomes the current audio, and the process is repeated, and so on. For each piece of current audio, the location of the user when that audio finishes playing is determined, which can be estimated based on the average speed of the user's travel mode and the playing duration of each piece of audio.
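- A minimal sketch of this per-audio position estimate is shown below (Python, illustrative only; the route is modeled simply as a distance from the start, which is an assumption made here for brevity):

```python
# Illustrative sketch: where will the user be when the current audio finishes?
def position_when_audio_ends(current_position_m: float,
                             remaining_audio_s: float,
                             current_speed_mps: float) -> float:
    """Estimated position along the route (meters from the start) when the
    current audio finishes playing. If the audio has already finished,
    remaining_audio_s is 0 and the result is simply the current position."""
    return current_position_m + remaining_audio_s * current_speed_mps

# Example: 40 s of audio left while travelling at 15 m/s from the 2,000 m mark.
print(position_when_audio_ends(2000.0, 40.0, 15.0))  # 2600.0
```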
- Then, based on the user's location, the estimated arrival time at the broadcast location point of the next navigation audio is determined.
- Specifically, the estimated time of arrival (ETA) at the broadcast location point of the next navigation audio can be estimated based on the distance between the user's location and that broadcast location point, the user's speed, and road conditions.
- the specific implementation of this part can use any ETA estimation method in the prior art, which will not be described in detail here.
- the next non-navigation audio to be played is selected according to the estimated arrival time.
- The core idea is that the selected non-navigation audio should finish playing within the estimated arrival time, or at least the core content of the non-navigation audio should finish playing within the estimated arrival time.
- the core content of the non-navigation audio refers to the part that can embody the theme of the non-navigation audio, and the user will generally understand the content of the audio after listening to the core content.
- the core content of a news audio is the part that can reflect the theme of the news
- the core content of a cross talk audio is the part that contains the main content of the cross talk
- the core content of a song audio is the part that contains the main song of the song.
- the core content of a joke is the part of the joke that contains the joke.
- Non-navigation audio can be obtained and selected from an audio pool, where the audio pool can be an audio pool maintained by a service provider of a map application, or an audio pool provided by a service provider of a third-party application with which it has a cooperative relationship.
- the audio pool contains various types of non-navigation audio, including but not limited to news, novels, music, songs, jokes, and so on.
- the audio pool also maintains the audio duration and core content identifiers of each non-navigation audio.
- The core content identifier marks the start time and the end time of the core content within the non-navigation audio, and the playback duration of the core content can be determined from this identifier.
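- As an illustration, an audio-pool entry carrying the duration and the core content identifiers might be represented as follows (field names are assumptions, not a format mandated by this application):

```python
from dataclasses import dataclass

@dataclass
class PoolAudio:
    """One non-navigation audio in the audio pool (illustrative structure)."""
    audio_id: str
    category: str        # e.g. "news", "music", "song", "joke", "novel"
    duration_s: float    # total audio duration
    core_start_s: float  # start time of the core content
    core_end_s: float    # end time of the core content

    @property
    def core_duration_s(self) -> float:
        # Playback duration of the core content, derived from its identifiers.
        return self.core_end_s - self.core_start_s

# Example: a 120 s news item whose core content spans 10 s to 75 s.
news_item = PoolAudio("news_001", "news", 120.0, 10.0, 75.0)
print(news_item.core_duration_s)  # 65.0
```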
- In addition to the duration constraint, non-navigation audio can also be selected based on the user's playback needs. Several preferred methods are provided below:
- Method 1: If the estimated arrival time is greater than a preset first duration threshold, the next navigation audio is considered to be relatively far away, and a user-demand-priority selection method is adopted; that is, the non-navigation audio required by the user is selected from the non-navigation audio whose audio duration or core content playback duration is less than the estimated arrival time.
- If the estimated arrival time is greater than or equal to a preset second duration threshold and less than or equal to the first duration threshold, the next navigation audio is considered to be relatively close, and a time-priority selection method is adopted; that is, the non-navigation audio required by the user is selected from the non-navigation audio whose audio duration or core content playback duration is less than and close to the estimated arrival time, where "close" means that the difference between the estimated arrival time and the audio duration or core content playback duration is less than the second duration threshold.
- If the estimated arrival time is less than the second duration threshold, it is considered that the next navigation audio is about to be played, and no non-navigation audio is inserted.
- the foregoing first duration threshold is greater than the second duration threshold.
- For example, the first duration threshold may be 4 minutes,
- and the second duration threshold may be 10 seconds. If, after a navigation audio or non-navigation audio finishes playing, the estimated arrival time of the next navigation audio is determined to be 6 minutes, which is greater than 4 minutes, the user-demand-priority selection method can be adopted: non-navigation audio whose playback duration exceeds 6 minutes is first filtered out of the audio pool (which contains various non-navigation audio), and then the audio that best meets the user's needs is selected from the remaining non-navigation audio.
- If the estimated arrival time of the next navigation audio is instead, say, 3 minutes, which lies between the two thresholds, the time-priority selection method is adopted: non-navigation audio whose playback duration or core content playback duration lies between 2 minutes 50 seconds and 3 minutes is found in the audio pool, and the non-navigation audio that meets the user's needs is then determined from among these.
- If the estimated arrival time is less than 10 seconds, no non-navigation audio is selected as the next audio; instead, the device waits for the next navigation audio.
- Method 2: If the estimated arrival time is greater than the preset second duration threshold, a non-navigation audio with the most appropriate duration is directly selected from the non-navigation audio required by the user. For example, after a navigation audio or non-navigation audio finishes playing, the estimated arrival time of the next navigation audio is determined to be 5 minutes, which is greater than 10 seconds. All the non-navigation audio required by the user is then determined from the audio pool, and from it the non-navigation audio whose audio duration or core content playback duration is less than and closest to 5 minutes is selected, such as a news item of 4 minutes and 55 seconds.
- Likewise, if the estimated arrival time is less than the second duration threshold, no non-navigation audio is selected as the next audio; instead, the device waits for the next navigation audio.
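- The threshold logic of Method 1 above can be summarized in the following sketch (Python, illustrative only); the 4-minute and 10-second values are taken from the example in the text, while `user_demand_score` stands in for whatever demand model is used and is an assumption here:

```python
# Illustrative sketch of the gap-filling selection; not the actual implementation.
FIRST_THRESHOLD_S = 4 * 60   # example first duration threshold: 4 minutes
SECOND_THRESHOLD_S = 10      # example second duration threshold: 10 seconds

def playable_duration_s(audio) -> float:
    # The audio fits the gap if either its full duration or its core content fits;
    # since the core content is never longer than the full audio, use the minimum.
    return min(audio.duration_s, audio.core_duration_s)

def select_next_non_navigation(eta_s, audio_pool, user_demand_score):
    if eta_s < SECOND_THRESHOLD_S:
        return None  # next navigation audio is imminent: insert nothing, just wait
    if eta_s > FIRST_THRESHOLD_S:
        # User-demand priority: any audio that fits the gap, ranked by demand.
        candidates = [a for a in audio_pool if playable_duration_s(a) < eta_s]
    else:
        # Time priority: the duration must be less than and close to the ETA.
        candidates = [a for a in audio_pool
                      if playable_duration_s(a) < eta_s
                      and eta_s - playable_duration_s(a) < SECOND_THRESHOLD_S]
    return max(candidates, key=user_demand_score) if candidates else None
```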
- As for the non-navigation audio required by the user, it can be determined according to at least one of destination, environmental conditions, route conditions, user driving status, and user preference information.
- the destination is mainly the type information of the destination, such as company, home, supermarket, transportation hub, scenic spot, etc. For example, users prefer warm music when they go home, news audio when they go to the company, and cheerful music when they go to scenic spots, and so on.
- The environmental conditions can include the current time, date, whether it is a holiday or a working day, the weather, and so on. These environmental conditions may affect the audio the user demands. For example, users may prefer different styles of music depending on whether the weather is clear or gloomy; for another example, users may prefer song audio on holidays and news audio on working days; and so on.
- The route conditions may include the congestion state, road grade, length, and so on of the current route. These conditions may also affect the audio demanded by the user. For example, when stuck in congestion, the user may prefer soothing music or news about road conditions; for another example, on a flat and long route, the user may prefer novel (audiobook) audio; and so on.
- The user's driving status may include the user's driving time, driving mileage, the congestion status of the road section, and so on. These conditions reflect user fatigue to a certain extent and also affect the audio the user demands. For example, when the user has been driving for a long time or over a long mileage, the user needs to stay alert and is more likely to need invigorating audio such as rock music.
- the user preference information may include the user's preference tag for the audio type, preference vector, etc. For example, the user prefers news type audio, or the user prefers jazz music, and so on.
- the user preference information can be determined by a tag set by the user, or can be determined based on the user's behavior feedback on the audio file (for example, the behavior of switching audio files, the behavior of collecting audio files, the behavior of listening to complete, etc.).
- At least one of the above factors can be combined to determine the non-navigation audio required by the user.
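- For illustration, such factors could be combined into a simple demand score, sketched below; the weights and category mappings are assumptions, not values prescribed by this application. A function like this could serve as the `user_demand_score` used in the earlier selection sketch, e.g. `lambda a: demand_score(a, context)`.

```python
# Illustrative scoring sketch: combines context factors into a preference score
# for a candidate audio. All weights and mappings are assumed.
def demand_score(audio, context):
    """context: dict with keys such as 'destination_type', 'weather',
    'is_congested', 'driving_minutes', and 'preferred_categories'."""
    score = 0.0
    # Destination type: e.g. going home favors music, going to work favors news.
    if context.get("destination_type") == "home" and audio.category == "music":
        score += 1.0
    if context.get("destination_type") == "company" and audio.category == "news":
        score += 1.0
    # Route condition: congestion favors soothing music or road-condition news.
    if context.get("is_congested") and audio.category in ("music", "news"):
        score += 0.5
    # Driving status: long drives favor invigorating content such as jokes or rock.
    if context.get("driving_minutes", 0) > 120 and audio.category in ("joke", "rock"):
        score += 1.0
    # Explicit user preference tags weigh the most.
    if audio.category in context.get("preferred_categories", []):
        score += 2.0
    return score
```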
- In addition, a switching prompt sound can be played between non-navigation audio and navigation audio; that is, when playback switches from non-navigation audio to navigation audio, a switching prompt sound can be inserted to give the user anticipation, so as to prevent situations such as the user missing a turn at an intersection or committing a traffic violation.
- the switching prompt sound can be, for example, a short beep sound, a human voice prompt, and so on.
- the specific form of the prompt tone is not specifically limited here.
- Fig. 4 is a structural diagram of a navigation audio playback device provided by an embodiment of the application.
- The device can be implemented on the server side, for example as a server-side application, or as a plug-in, software development kit (SDK), or other functional unit located in a server-side application. Alternatively, if the terminal device has sufficient computing power, the device can also be implemented on the terminal device side.
- the device may include: a navigation determination unit 00 and a broadcast processing unit 10, wherein the main functions of each component unit are as follows:
- the navigation determination unit 00 is responsible for determining the navigation audio and the broadcast location point to be broadcast in the navigation route.
- the navigation determining unit 00 may select a navigation audio with an importance level matching the familiarity level from the navigation audio of the navigation route as the navigation audio to be broadcast according to the user's familiarity with the navigation route and the importance of the navigation audio.
- the user's familiarity with the navigation route can be determined based on the number of times the user has navigated the route in history. If the number of times the user has navigated the route in history exceeds the preset number threshold, it can be considered that the user is familiar with the navigation route.
- The broadcast processing unit 10 is responsible for broadcasting the corresponding navigation audio at the broadcast location points, and for selecting the non-navigation audio to be played according to the gap time between the broadcast location points.
- the broadcast processing unit 10 may specifically include: a scene judgment subunit 11 and a content recommendation subunit 12.
- The scene judgment subunit 11 is responsible for determining the location of the user when the current navigation audio or non-navigation audio finishes playing, and, according to the user's location, determining the estimated time of arrival at the broadcast location point of the next navigation audio.
- the estimated arrival time to the broadcast location of the next navigation audio can be estimated based on the distance between the location of the user and the broadcast location of the next navigation audio, the user's speed, and road conditions.
- the scene judgment subunit 11 may provide the user's location and the broadcast position of the next navigation audio to the ETA service by calling the ETA service interface, and the ETA service will estimate the estimated arrival time and return it to the scene judgment subunit 11.
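- A hedged sketch of such a call is shown below; the endpoint URL, request fields, and response key are hypothetical, since the application does not specify a concrete ETA service interface:

```python
import json
from urllib import request

ETA_SERVICE_URL = "https://example.com/eta"  # placeholder endpoint, not a real service

def estimate_arrival_seconds(user_location, next_broadcast_point):
    """Ask an (assumed) ETA service how long until the next broadcast location point."""
    payload = json.dumps({
        "origin": user_location,              # e.g. {"lat": 40.05, "lng": 116.30}
        "destination": next_broadcast_point,  # same structure as origin
    }).encode("utf-8")
    req = request.Request(ETA_SERVICE_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body["eta_seconds"]  # assumed response field
```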
- the content recommendation subunit 12 is responsible for selecting the next non-navigation audio to be played according to the estimated arrival time.
- the content recommendation subunit 12 may use, but is not limited to, the following methods to select the next non-navigation audio to be played.
- Method 1: If the estimated arrival time is greater than a preset first duration threshold, the next navigation audio is considered to be relatively far away, and a user-demand-priority selection method is adopted; that is, the non-navigation audio required by the user is selected from the non-navigation audio whose audio duration or core content playback duration is less than the estimated arrival time;
- If the estimated arrival time is greater than or equal to a preset second duration threshold and less than or equal to the first duration threshold, the next navigation audio is considered to be relatively close, and a time-priority selection method is adopted; that is, the non-navigation audio required by the user is selected from the non-navigation audio whose audio duration or core content playback duration is less than and close to the estimated arrival time, where the difference between the estimated arrival time and the audio duration or core content playback duration is less than the second duration threshold;
- the foregoing first duration threshold is greater than the second duration threshold.
- The content recommendation subunit 12 can obtain non-navigation audio from an audio pool for selection, where the audio pool can be an audio pool maintained by the service provider of the map application, or an audio pool provided by the service provider of a third-party application with which it has a cooperative relationship.
- the audio pool contains various types of non-navigation audio, including but not limited to news, novels, music, songs, jokes, and so on.
- the audio pool also maintains the audio duration and core content identifiers of each non-navigation audio.
- the core content identification refers to the identification of the start time and the end time of the core content of the non-navigation audio, and the playback duration of the core content can be determined by the identification.
- the content recommendation subunit 12 may determine the non-navigation audio required by the user according to at least one of destination, environmental conditions, route conditions, user driving conditions, and user preference information.
- The above-mentioned broadcast processing unit 10 can also play a switching prompt sound between non-navigation audio and navigation audio to give the user anticipation, thereby reminding the user to listen to the navigation audio that is about to be played, so as to prevent situations such as the user missing a turn at an intersection or committing a traffic violation.
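- Purely as an illustration of how these units could be wired together in the spirit of Fig. 4 (class and method names are assumptions, not the actual implementation):

```python
# Illustrative wiring of the units described above; names are assumed.
class SceneJudgmentSubunit:
    def locate_user_at_audio_end(self, current_audio, user_state):
        ...  # estimate the user's position when the current audio finishes

    def estimate_eta(self, user_location, next_broadcast_point):
        ...  # e.g. by calling an ETA service as sketched earlier

class ContentRecommendationSubunit:
    def pick_next_non_navigation(self, eta_s, audio_pool, context):
        ...  # threshold-based selection as sketched earlier

class BroadcastProcessingUnit:
    def __init__(self):
        self.scene_judgment = SceneJudgmentSubunit()                   # subunit 11
        self.content_recommendation = ContentRecommendationSubunit()  # subunit 12

    def fill_gap(self, current_audio, user_state, next_point, audio_pool, context):
        location = self.scene_judgment.locate_user_at_audio_end(current_audio, user_state)
        eta_s = self.scene_judgment.estimate_eta(location, next_point)
        return self.content_recommendation.pick_next_non_navigation(eta_s, audio_pool, context)
```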
- For example, navigation route 1 is user A's commute route from home to the company. Because user A is familiar with the route, the navigation broadcast content mainly focuses on turning points, each intersection is announced only once, and electronic eye information is not broadcast.
- The audio types that user A prefers are mainly news, music, and jokes.
- First, news A is recommended; the playing duration of news A is less than the user's estimated arrival time at broadcast location point 1.
- When the user's estimated arrival time at broadcast location point 1 drops below 4 minutes, music a that the user is interested in is played to fill the time.
- When the estimated arrival time is less than 10 seconds, no other non-navigation audio is inserted; the switching prompt sound is played for the user, and playback switches to the navigation audio of broadcast location point 1: "Keep right uphill, enter the expressway, and head towards G6".
- After that, music b that the user is interested in is played to fill the time.
- When the estimated arrival time at broadcast location point 2 is less than 10 seconds, no other non-navigation audio is inserted.
- The navigation audio at broadcast location point 2 is then played: "Keep to the left ahead and enter the North Fifth Ring Road".
- News G then begins to play.
- When the user's estimated arrival time at broadcast location point 3 is less than 4 minutes, and because the user has been driving for a long time and is fatigued, joke c, which fits the scene at that moment, starts playing to help the user refresh.
- Afterwards, news H/I/J/K/L continue to be played.
- The navigation audio at broadcast location point 4 is then broadcast directly: "Keep right ahead, exit the expressway, and head towards the Shangdi West Road exit".
- After the user exits the expressway and is about to enter an extremely slow road section, calming music d/e/f is played in order to keep the user from becoming distracted and causing an accident. Then, after the switching prompt sound is played, the navigation audio at broadcast location point 5 broadcasts "turn left", and so on until the user reaches the destination.
- the present application also provides an electronic device and a readable storage medium.
- As shown in FIG. 6, it is a block diagram of an electronic device for the navigation audio playback method according to an embodiment of the present application.
- Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
- Electronic devices can also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices.
- the components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementation of the application described and/or required herein.
- the electronic device includes one or more processors 601, a memory 602, and interfaces for connecting various components, including a high-speed interface and a low-speed interface.
- the various components are connected to each other using different buses, and can be installed on a common motherboard or installed in other ways as needed.
- the processor may process instructions executed in the electronic device, including instructions stored in or on the memory to display graphical information of the GUI on an external input/output device (such as a display device coupled to an interface).
- In other embodiments, if necessary, multiple processors and/or multiple buses can be used together with multiple memories.
- multiple electronic devices can be connected, and each device provides part of the necessary operations (for example, as a server array, a group of blade servers, or a multi-processor system).
- In FIG. 6, one processor 601 is taken as an example.
- the memory 602 is a non-transitory computer-readable storage medium provided by this application.
- the memory stores instructions executable by at least one processor, so that the at least one processor executes the navigation audio playback method provided in this application.
- the non-transitory computer-readable storage medium of the present application stores computer instructions, and the computer instructions are used to make the computer execute the navigation audio playback method provided by the present application.
- the memory 602 can be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as program instructions/modules corresponding to the navigation audio playback method in the embodiment of the present application.
- the processor 601 executes various functional applications and data processing of the server by running non-transient software programs, instructions, and modules stored in the memory 602, that is, implements the navigation audio playback method in the foregoing method embodiment.
- the memory 602 may include a program storage area and a data storage area.
- the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the electronic device.
- the memory 602 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices.
- the memory 602 may optionally include memories remotely provided with respect to the processor 601, and these remote memories may be connected to the electronic device through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
- the electronic device may further include: an input device 603 and an output device 604.
- the processor 601, the memory 602, the input device 603, and the output device 604 may be connected by a bus or in other ways. In FIG. 6, the connection by a bus is taken as an example.
- The input device 603 can receive input digital or character information, and generate key signal input related to the user settings and function control of the electronic device; examples of such input devices include a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, a joystick, and so on.
- the output device 604 may include a display device, an auxiliary lighting device (for example, LED), a tactile feedback device (for example, a vibration motor), and the like.
- the display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some embodiments, the display device may be a touch screen.
- Various implementations of the systems and techniques described herein can be implemented in digital electronic circuit systems, integrated circuit systems, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs, and the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor. The programmable processor may be a dedicated or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
- As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (for example, magnetic disks, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals.
- The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
- In order to provide interaction with the user, the systems and techniques described here can be implemented on a computer that has: a display device for displaying information to the user (for example, a CRT (cathode ray tube) or LCD (liquid crystal display) monitor); and a keyboard and a pointing device (for example, a mouse or a trackball) through which the user can provide input to the computer.
- Other types of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, voice input, or tactile input).
- The systems and technologies described herein can be implemented in a computing system that includes back-end components (for example, as a data server), or a computing system that includes middleware components (for example, an application server), or a computing system that includes front-end components (for example, a user computer with a graphical user interface or a web browser through which the user can interact with an implementation of the systems and technologies described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
- the components of the system can be connected to each other through any form or medium of digital data communication (for example, a communication network). Examples of communication networks include: local area network (LAN), wide area network (WAN), and the Internet.
- the computer system can include clients and servers.
- the client and server are generally far away from each other and usually interact through a communication network.
- the relationship between the client and the server is generated by computer programs that run on the corresponding computers and have a client-server relationship with each other.
Claims (18)
- 1. A navigation audio playback method, comprising: determining the navigation audio to be broadcast and the broadcast location points in a navigation route; broadcasting the corresponding navigation audio at the broadcast location points, and, in the gap time between the broadcast location points, selecting non-navigation audio to be played according to the gap duration.
- 2. The method according to claim 1, wherein the determining the navigation audio to be broadcast in the navigation route comprises: according to the user's familiarity with the navigation route and the importance of the navigation audio, selecting, from the navigation audio of the navigation route, navigation audio whose importance matches the familiarity as the navigation audio to be broadcast.
- 3. The method according to claim 1, wherein the selecting, in the gap time between the broadcast location points, non-navigation audio to be played according to the gap duration comprises: determining the location of the user when the current navigation audio or non-navigation audio finishes playing; determining, according to the user's location, the estimated arrival duration to the broadcast location point of the next navigation audio; and selecting the next non-navigation audio to be played according to the estimated arrival duration.
- 4. The method according to claim 3, wherein selecting the next non-navigation audio to be played according to the estimated arrival duration comprises: if the estimated arrival duration is greater than a preset first duration threshold, selecting the non-navigation audio required by the user from non-navigation audio whose audio duration or core content playback duration is less than the estimated arrival duration; if the estimated arrival duration is greater than or equal to a preset second duration threshold and less than or equal to the first duration threshold, selecting the non-navigation audio required by the user from non-navigation audio whose audio duration or core content playback duration is less than and close to the estimated arrival duration, wherein being close means that the difference between the estimated arrival duration and the audio duration or core content playback duration is less than the second duration threshold; and the first duration threshold is greater than the second duration threshold.
- 5. The method according to claim 3, wherein selecting the next non-navigation audio to be played according to the estimated arrival duration comprises: if the estimated arrival duration is greater than a preset second duration threshold, selecting, from the non-navigation audio required by the user, the non-navigation audio whose audio duration or core content playback duration is less than and closest to the estimated arrival duration.
- 6. The method according to claim 4 or 5, wherein selecting the next non-navigation audio to be played according to the estimated arrival duration further comprises: if the estimated arrival duration is less than the second duration threshold, not selecting any non-navigation audio.
- 7. The method according to claim 4 or 5, wherein the non-navigation audio required by the user is determined according to at least one of destination, environmental conditions, route conditions, user driving status, and user preference information.
- 8. The method according to claim 1, further comprising: playing a switching prompt sound between non-navigation audio and navigation audio.
- 9. A navigation audio playback apparatus, comprising: a navigation determination unit, configured to determine the navigation audio to be broadcast and the broadcast location points in a navigation route; and a broadcast processing unit, configured to broadcast the corresponding navigation audio at the broadcast location points, and, in the gap time between the broadcast location points, to select non-navigation audio to be played according to the gap duration.
- 10. The apparatus according to claim 9, wherein the navigation determination unit is specifically configured to, according to the user's familiarity with the navigation route and the importance of the navigation audio, select, from the navigation audio of the navigation route, navigation audio whose importance matches the familiarity as the navigation audio to be broadcast.
- 11. The apparatus according to claim 9, wherein the broadcast processing unit specifically comprises: a scene judgment subunit, configured to determine the location of the user when the current navigation audio or non-navigation audio finishes playing, and to determine, according to the user's location, the estimated arrival duration to the broadcast location point of the next navigation audio; and a content recommendation subunit, configured to select the next non-navigation audio to be played according to the estimated arrival duration.
- 12. The apparatus according to claim 11, wherein the content recommendation subunit is specifically configured to: if the estimated arrival duration is greater than a preset first duration threshold, select the non-navigation audio required by the user from non-navigation audio whose audio duration or core content playback duration is less than the estimated arrival duration; if the estimated arrival duration is greater than or equal to a preset second duration threshold and less than or equal to the first duration threshold, select the non-navigation audio required by the user from non-navigation audio whose audio duration or core content playback duration is less than and close to the estimated arrival duration, wherein being close means that the difference between the estimated arrival duration and the audio duration or core content playback duration is less than the second duration threshold; and the first duration threshold is greater than the second duration threshold.
- 13. The apparatus according to claim 11, wherein the content recommendation subunit is specifically configured to: if the estimated arrival duration is greater than a preset second duration threshold, select, from the non-navigation audio required by the user, the non-navigation audio whose audio duration or core content playback duration is less than and closest to the estimated arrival duration.
- 14. The apparatus according to claim 12 or 13, wherein the content recommendation subunit is further configured to: if the estimated arrival duration is less than the second duration threshold, not select any non-navigation audio.
- 15. The apparatus according to claim 12 or 13, wherein the content recommendation subunit is further configured to determine the non-navigation audio required by the user according to at least one of destination, environmental conditions, route conditions, user driving status, and user preference information.
- 16. The apparatus according to claim 9, wherein the broadcast processing unit is further configured to play a switching prompt sound between non-navigation audio and navigation audio.
- 17. An electronic device, comprising: at least one processor; and a memory communicatively connected with the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the method according to any one of claims 1-8.
- 18. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to execute the method according to any one of claims 1-8.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG11202107063XA SG11202107063XA (en) | 2020-05-22 | 2020-11-25 | Method, apparatus, device for playing navigation audios, and computer storage medium |
US17/419,013 US20220308826A1 (en) | 2020-05-22 | 2020-11-25 | Method, apparatus, device for playing navigation audios |
KR1020217027814A KR20210114537A (ko) | 2020-05-22 | 2020-11-25 | 네비게이션 오디오 방송 방법, 장치, 기기 및 컴퓨터 저장 매체 |
EP20900748.3A EP3940341A4 (en) | 2020-05-22 | 2020-11-25 | METHOD, APPARATUS AND DEVICE FOR AUDIO PLAYBACK OF NAVIGATION, AND COMPUTER RECORDING MEDIUM |
JP2021538075A JP7383026B2 (ja) | 2020-05-22 | 2020-11-25 | ナビゲーションオーディオの再生方法、装置、機器及びコンピュータ記憶媒体 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010439893.5A CN111735472A (zh) | 2020-05-22 | 2020-05-22 | 一种导航音频的播放方法、装置、设备和计算机存储介质 |
CN202010439893.5 | 2020-05-22 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021232726A1 true WO2021232726A1 (zh) | 2021-11-25 |
Family
ID=72647558
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/131319 WO2021232726A1 (zh) | 2020-05-22 | 2020-11-25 | 一种导航音频的播放方法、装置、设备和计算机存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111735472A (zh) |
WO (1) | WO2021232726A1 (zh) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111735472A (zh) * | 2020-05-22 | 2020-10-02 | 百度在线网络技术(北京)有限公司 | 一种导航音频的播放方法、装置、设备和计算机存储介质 |
CN112767909A (zh) * | 2021-01-27 | 2021-05-07 | 腾讯科技(深圳)有限公司 | 音频混音方法、装置、介质以及电子设备 |
CN114816608A (zh) * | 2021-01-29 | 2022-07-29 | 腾讯科技(深圳)有限公司 | 媒体文件的播放方法、装置、电子设备及存储介质 |
CN112857392A (zh) * | 2021-02-25 | 2021-05-28 | 北京百度网讯科技有限公司 | 导航语音播报方法、装置、设备以及存储介质 |
CN115086705A (zh) * | 2021-03-12 | 2022-09-20 | 北京字跳网络技术有限公司 | 一种资源预加载方法、装置、设备和存储介质 |
CN113434309B (zh) * | 2021-06-23 | 2024-06-21 | 东风汽车有限公司东风日产乘用车公司 | 一种消息播报方法、装置及存储介质 |
CN113934397B (zh) * | 2021-10-15 | 2024-09-03 | 深圳市一诺成电子有限公司 | 电子设备中播音控制方法及电子设备 |
CN115842945A (zh) * | 2022-12-02 | 2023-03-24 | 中国第一汽车股份有限公司 | 一种基于导航数据的车载媒体内容播放方法、装置 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160045353A (ko) * | 2014-10-17 | 2016-04-27 | 현대자동차주식회사 | 에이브이엔 장치, 차량, 및 에이브이엔 장치의 제어방법 |
CN110316076A (zh) * | 2018-03-29 | 2019-10-11 | 蔚来汽车有限公司 | 用于播报导航信息的方法、装置和计算机存储介质 |
CN109151185A (zh) * | 2018-08-01 | 2019-01-04 | 张家港市鸿嘉数字科技有限公司 | 一种根据车辆行驶场景匹配音乐类型的方法及装置 |
CN110017847B (zh) * | 2019-03-21 | 2021-03-16 | 腾讯大地通途(北京)科技有限公司 | 一种自适应导航语音播报方法、装置及系统 |
CN110174116B (zh) * | 2019-04-15 | 2020-03-31 | 北京百度网讯科技有限公司 | 生成导航播报内容的方法、装置、设备和计算机存储介质 |
CN110068353A (zh) * | 2019-04-29 | 2019-07-30 | 上海擎感智能科技有限公司 | 车载导航设备及其导航方法 |
CN110264760B (zh) * | 2019-06-21 | 2021-12-07 | 腾讯科技(深圳)有限公司 | 一种导航语音播放方法、装置及电子设备 |
2020
- 2020-05-22 CN CN202010439893.5A patent/CN111735472A/zh active Pending
- 2020-11-25 WO PCT/CN2020/131319 patent/WO2021232726A1/zh unknown
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN2800428Y (zh) * | 2005-01-26 | 2006-07-26 | 菱科电子技术(中国)有限公司 | 车载gps导航、娱乐系统 |
CN101469995A (zh) * | 2007-12-27 | 2009-07-01 | 英业达股份有限公司 | 导航及多媒体切换方法及应用其的电子装置 |
CN101419077A (zh) * | 2008-11-19 | 2009-04-29 | 凯立德欣技术(深圳)有限公司 | 语音播报的方法及使用此方法的语音播报装置、导航系统 |
CN102768044A (zh) * | 2012-07-31 | 2012-11-07 | 深圳市赛格导航科技股份有限公司 | 一种可记录用户行车习惯的导航仪及其记录和重放方法 |
US10641613B1 (en) * | 2014-03-14 | 2020-05-05 | Google Llc | Navigation using sensor fusion |
CN107170472A (zh) * | 2016-03-08 | 2017-09-15 | 阿里巴巴集团控股有限公司 | 一种车载音频数据播放方法和设备 |
CN106653064A (zh) * | 2016-12-13 | 2017-05-10 | 北京云知声信息技术有限公司 | 音频播放方法及装置 |
CN107819949A (zh) * | 2017-11-01 | 2018-03-20 | 深圳天珑无线科技有限公司 | 信息播放方法、终端及计算机可读存储介质 |
CN110717094A (zh) * | 2019-09-03 | 2020-01-21 | 平安科技(深圳)有限公司 | 信息推荐方法、装置、计算机设备和存储介质 |
CN111081283A (zh) * | 2019-12-25 | 2020-04-28 | 惠州Tcl移动通信有限公司 | 一种音乐播放方法、装置、存储介质及终端设备 |
CN111735472A (zh) * | 2020-05-22 | 2020-10-02 | 百度在线网络技术(北京)有限公司 | 一种导航音频的播放方法、装置、设备和计算机存储介质 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114882721A (zh) * | 2022-05-27 | 2022-08-09 | 中国第一汽车股份有限公司 | 一种车载导航信息播放方法、装置、电子设备及存储介质 |
CN114882721B (zh) * | 2022-05-27 | 2023-05-09 | 中国第一汽车股份有限公司 | 一种车载导航信息播放方法、装置、电子设备及存储介质 |
CN114973740A (zh) * | 2022-06-06 | 2022-08-30 | 北京百度网讯科技有限公司 | 语音播报时机的确定方法、装置及电子设备 |
CN114973740B (zh) * | 2022-06-06 | 2023-09-12 | 北京百度网讯科技有限公司 | 语音播报时机的确定方法、装置及电子设备 |
Also Published As
Publication number | Publication date |
---|---|
CN111735472A (zh) | 2020-10-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021232726A1 (zh) | 一种导航音频的播放方法、装置、设备和计算机存储介质 | |
US11874124B2 (en) | Duration-based customized media program | |
US11060878B2 (en) | Generating personalized routes with user route preferences | |
US11868396B2 (en) | Generating and distributing playlists with related music and stories | |
RU2707410C2 (ru) | Автомобильный мультимодальный интерфейс | |
KR102393817B1 (ko) | 인간 대 컴퓨터 다이얼로그들에 요청되지 않은 콘텐츠의 사전 통합 | |
US20140201004A1 (en) | Managing Interactive In-Vehicle Advertisements | |
US20190342359A1 (en) | Saving Media for Audio Playout | |
US9535654B2 (en) | Method and apparatus for associating an audio soundtrack with one or more video clips | |
WO2017166593A1 (zh) | 一种基于地图的导航方法、装置和存储介质 | |
US10809973B2 (en) | Playlist selection for audio streaming | |
WO2017166591A1 (zh) | 基于地图的导航方法、装置、存储介质及设备 | |
US11248927B2 (en) | Systems and methods for providing uninterrupted media content during vehicle navigation | |
US20220252412A1 (en) | Systems and methods for providing uninterrupted media content during vehicle navigation | |
US11402231B2 (en) | Systems and methods for providing uninterrupted media content during vehicle navigation | |
WO2024037086A1 (zh) | 出行信息分享方法、装置、计算机设备及存储介质 | |
US20220308826A1 (en) | Method, apparatus, device for playing navigation audios | |
JP2006306242A (ja) | 車載情報提供装置 | |
JP2005274490A (ja) | カーナビゲーション装置及びカーナビゲーション装置用制御方法 | |
CN117290606A (zh) | 推荐信息的展示方法、装置、系统、设备及存储介质 | |
CN116105754A (zh) | 导航信息获取方法、电子设备和存储介质 | |
CN112767909A (zh) | 音频混音方法、装置、介质以及电子设备 | |
JP2020118894A (ja) | 再生制御装置、再生装置、再生制御方法、およびプログラム | |
JP2003287430A (ja) | 経路設定方法、経路設定サーバ、経路設定装置、及び経路設定プログラム |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | ENP | Entry into the national phase | Ref document number: 2021538075; Country of ref document: JP; Kind code of ref document: A
 | ENP | Entry into the national phase | Ref document number: 2020900748; Country of ref document: EP; Effective date: 20210625
 | ENP | Entry into the national phase | Ref document number: 20217027814; Country of ref document: KR; Kind code of ref document: A
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20900748; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE