CN116164772A - Navigation voice information processing method, device, equipment, medium and product

Info

Publication number
CN116164772A
Authority
CN
China
Prior art keywords
scene
navigation
information
voice
changeable
Prior art date
Legal status
Pending
Application number
CN202310121684.XA
Other languages
Chinese (zh)
Inventor
刘顺
胡波
张翔
张海波
赵明
张俊
杨夕凯
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority claimed from CN202310121684.XA
Publication of CN116164772A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/36 - Input/output arrangements for on-board computers
    • G01C21/3626 - Details of the output of route guidance instructions
    • G01C21/3629 - Guidance using speech or audio output, e.g. text-to-speech
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 - Speech synthesis; Text to speech systems
    • G10L13/02 - Methods for producing synthetic speech; Speech synthesisers

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Navigation (AREA)

Abstract

The present disclosure relates to a navigation voice information processing method, apparatus, device, medium, and product. The method includes: acquiring change configuration information of a voice packet to be broadcasted, wherein the voice packet to be broadcasted includes original navigation voice information for each scene, at least one scene is a changeable scene, and the change configuration information includes change content for the original navigation voice information of each changeable scene and a change mode preset for each changeable scene; and changing the original navigation voice information of the changeable scene in the voice packet to be broadcasted according to the change configuration information, so as to obtain post-change navigation voice information of the changeable scene. With this technical solution, the post-change navigation voice information can carry the personal language characteristics of the character corresponding to the voice packet, thereby highlighting those characteristics and making the voice broadcast function more interesting.

Description

Navigation voice information processing method, device, equipment, medium and product
Technical Field
The present disclosure relates to the technical field of electronic map navigation, and in particular to a navigation voice information processing method, apparatus, device, medium, and product.
Background
Electronic map navigation addresses problems such as complicated roads, hard-to-grasp road conditions, and optimal route selection, so people increasingly depend on electronic map navigation software when traveling. While driving, a user usually cannot operate a mobile phone or vehicle-mounted device by hand, nor stare at its screen, because a moment of distraction can easily cause a serious accident. To free the user's hands and eyes, the navigation voice broadcasting function of electronic map navigation software becomes particularly important.
In the navigation voice broadcasting function of existing electronic map navigation software, the broadcast scripts used by different voice packets are identical, so the personal language characteristics of the character corresponding to each voice packet cannot be conveyed, and the broadcasts lack interest.
Disclosure of Invention
In order to solve, or at least partially solve, the above technical problems, the present disclosure provides a navigation voice information processing method, apparatus, device, medium, and product.
In a first aspect, the present disclosure provides a navigation voice information processing method, including:
acquiring change configuration information of a voice packet to be broadcasted, wherein the voice packet to be broadcasted includes original navigation voice information for each scene, at least one scene is a changeable scene, and the change configuration information includes change content for the original navigation voice information of each changeable scene and a change mode preset for each changeable scene;
and changing the original navigation voice information of the changeable scene in the voice packet to be broadcasted according to the change configuration information, so as to obtain post-change navigation voice information of the changeable scene.
In a second aspect, the present disclosure further provides a navigation voice broadcasting method, including:
receiving post-change navigation voice information of a changeable scene, wherein the post-change navigation voice information is obtained by changing, according to change configuration information, the original navigation voice information of the changeable scene in a voice packet to be broadcasted; the voice packet to be broadcasted includes original navigation voice information for each scene, and at least one scene is a changeable scene; the change configuration information includes change content for the original navigation voice information of each changeable scene and a change mode preset for each changeable scene;
and performing navigation voice broadcasting according to the voice packet to be broadcasted and the post-change navigation voice information of the changeable scene.
In a third aspect, the present disclosure also provides a navigation voice information processing apparatus, including:
an acquisition module, configured to acquire change configuration information of a voice packet to be broadcasted, wherein the voice packet to be broadcasted includes original navigation voice information for each scene, at least one scene is a changeable scene, and the change configuration information includes change content for the original navigation voice information of each changeable scene and a change mode preset for each changeable scene;
and a changing module, configured to change the original navigation voice information of the changeable scene in the voice packet to be broadcasted according to the change configuration information, so as to obtain post-change navigation voice information of the changeable scene.
In a fourth aspect, the present disclosure further provides a navigation voice broadcasting apparatus, including:
a receiving module, configured to receive post-change navigation voice information of a changeable scene, wherein the post-change navigation voice information is obtained by changing, according to change configuration information, the original navigation voice information of the changeable scene in a voice packet to be broadcasted; the voice packet to be broadcasted includes original navigation voice information for each scene, and at least one scene is a changeable scene; the change configuration information includes change content for the original navigation voice information of each changeable scene and a change mode preset for each changeable scene;
and a broadcasting module, configured to perform navigation voice broadcasting according to the voice packet to be broadcasted and the post-change navigation voice information of the changeable scene.
In a fifth aspect, the present disclosure also provides an electronic device, including a memory and a processor, wherein
the memory is configured to store instructions executable by the processor;
and the processor is configured to read the executable instructions from the memory and execute them to implement any one of the above navigation voice information processing methods or navigation voice broadcasting methods.
In a sixth aspect, the present disclosure also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements any one of the above navigation voice information processing methods or navigation voice broadcasting methods.
In a seventh aspect, embodiments of the present disclosure further provide a computer program product for executing any one of the above navigation voice information processing methods or navigation voice broadcasting methods.
In the technical solution of the navigation voice information processing method provided by the embodiments of the present disclosure, change configuration information of a voice packet to be broadcasted is acquired, wherein the voice packet to be broadcasted includes original navigation voice information for each scene, at least one scene is a changeable scene, and the change configuration information includes change content for the original navigation voice information of each changeable scene and a change mode preset for each changeable scene; the original navigation voice information of the changeable scene in the voice packet to be broadcasted is then changed according to the change configuration information to obtain post-change navigation voice information of the changeable scene. In this way, the original navigation voice information in the voice packet to be broadcasted can be personalized, so that the post-change navigation voice information carries the personal language characteristics of the character corresponding to the voice packet. This achieves the purpose of displaying those characteristics, makes the voice broadcast function more interesting, and improves user experience.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of a navigation system provided in an embodiment of the present disclosure;
FIG. 2 is a flowchart of the operation of the navigation system provided in FIG. 1;
FIG. 3 is a schematic diagram of another navigation system provided by an embodiment of the present disclosure;
FIG. 4 is a flowchart of a navigation voice information processing method according to an embodiment of the present disclosure;
FIG. 5 is a flowchart of a navigation voice broadcasting method according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a navigation voice information processing apparatus according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a navigation voice broadcasting apparatus according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a", "an", and "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
Fig. 1 is a schematic diagram of a navigation system according to an embodiment of the present disclosure. The navigation system can be used to implement the navigation voice information processing method and the navigation voice broadcasting method provided by the embodiments of the present disclosure.
Referring to fig. 1, the navigation system includes an online service device and a map navigation broadcasting system (TBT). The map navigation broadcasting system includes a terminal device and a cloud device. The online service device is in communication connection with the cloud device, and the terminal device is in communication connection with the cloud device.
Fig. 2 is a flowchart of the operation of the navigation system provided in fig. 1. With continued reference to fig. 2, a plurality of pieces of change configuration information are stored in the online service device, and each piece of change configuration information corresponds to one voice packet. When the user wants to navigate with a certain voice packet, the user selects the voice packet he or she wishes to use (hereinafter referred to as the voice packet to be broadcasted) on the terminal device. The terminal device sends the voice packet identifier of the voice packet to be broadcasted to the cloud device; the cloud device sends a configuration information acquisition request including the voice packet identifier to the online service device; and the online service device, in response to the acquisition request, queries the change configuration information corresponding to the voice packet identifier and sends the queried change configuration information of the voice packet to be broadcasted to the cloud device. The cloud device changes the original navigation voice information of each changeable scene in the voice packet to obtain post-change navigation voice information. The cloud device further encodes the post-change navigation voice information into a binary code and sends it to the terminal device. The terminal device decodes and assembles the received binary code to obtain the post-change navigation voice information. The terminal device then adds the post-change navigation voice information to the broadcast voice pool according to the navigation requirements, sorts and simplifies all the navigation voice information in the broadcast voice pool, and determines the final navigation voice information to be broadcasted. The terminal device performs voice broadcasting based on the final navigation voice information to be broadcasted.
The terminal device may be a mobile terminal or a vehicle-mounted device. The cloud device may specifically be a server.
Fig. 3 is a schematic diagram of another navigation system provided in an embodiment of the present disclosure. This navigation system can likewise be used to implement the navigation voice information processing method and the navigation voice broadcasting method provided by the embodiments of the present disclosure. Compared with fig. 1, the map navigation broadcasting system here includes only a terminal device.
The online service device stores a plurality of pieces of change configuration information, and each piece of change configuration information corresponds to one voice packet. When the user wants to navigate with a certain voice packet, the user selects the voice packet he or she wishes to use (hereinafter referred to as the voice packet to be broadcasted) on the terminal device. The terminal device determines the voice packet identifier of the voice packet to be broadcasted and sends a configuration information acquisition request including the voice packet identifier to the online service device; the online service device, in response to the acquisition request, queries the change configuration information corresponding to the voice packet identifier and sends the queried change configuration information of the voice packet to be broadcasted to the terminal device. The terminal device changes the original navigation voice information of each changeable scene in the voice packet to obtain post-change navigation voice information. The terminal device then adds the post-change navigation voice information to the broadcast voice pool according to the navigation requirements, sorts and simplifies all the navigation voice information in the broadcast voice pool, determines the final navigation voice information to be broadcasted, and performs voice broadcasting on it.
The terminal device may be a mobile terminal or a vehicle-mounted device.
Fig. 4 is a flowchart of a navigation voice information processing method according to an embodiment of the present disclosure. The navigation voice information processing method may be performed by a navigation voice information processing apparatus, which may be part of an electronic map application, may be implemented in software and/or hardware, and may be integrated on any electronic device having computing capability, such as a terminal device or a cloud device. The terminal device may specifically be a vehicle-mounted device or a mobile terminal.
As shown in fig. 4, the navigation voice information processing method includes:
S110, acquiring change configuration information of a voice packet to be broadcasted, wherein the voice packet to be broadcasted includes original navigation voice information for each scene, at least one scene is a changeable scene, and the change configuration information includes change content for the original navigation voice information of each changeable scene and a change mode preset for each changeable scene.
A voice packet is a set of original navigation voice information in which different pieces of original navigation voice information correspond to different scenes. Each voice packet corresponds to a character. The character may be a real character or a virtual character, such as a star or a cartoon character.
Navigation voice information refers to the navigation content broadcast while the user is traveling, for example, "turn right at the traffic-light intersection in one hundred meters". Original navigation voice information refers to the broadcast content before any change is made.
A scene is a situation that requires voice broadcasting. Typically, a scene corresponds to a location; for example, the scene may be a traffic intersection, a school, a service area, or a congested road segment. In the voice packet to be broadcasted, one scene may correspond to one piece of original navigation voice information, and one piece of original navigation voice information includes one or more navigation broadcast sentences.
A scene whose change content is included in the change configuration information is a changeable scene. The change content is the basis for changing the original navigation voice information and may be text information or audio information. For example, the change content may be voice information prerecorded by the character corresponding to the voice packet, or it may be a script authored by an operator of the electronic map. The change content may be a single sentence, multiple sentences, or a common phrase. The change mode defines how the change content is used to change the original navigation voice information.
In one example, the preset change mode includes at least one of front-end hooking, back-end hooking, whole-sentence replacement, phrase replacement, and global replacement. Front-end hooking means prepending the change content to the front of the original navigation voice information. Back-end hooking means appending the change content to the end of the original navigation voice information. Whole-sentence replacement means replacing a particular navigation broadcast sentence in the original navigation voice information with the change content. Phrase replacement means replacing a phrase in the original navigation voice information with the change content. Global replacement means replacing the original navigation voice information with the change content in its entirety.
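By way of illustration only, these five change modes can be captured by a small text-level routine such as the sketch below. Nothing here is taken from the disclosure: the function name, parameter names, mode labels, and the choice of sentence separator are assumptions made for this example.

    # Illustrative sketch of applying a preset change mode to the text version of one
    # piece of original navigation voice information. All names are assumptions.
    def apply_change(original: str, change_content: str, mode: str,
                     sentence_index: int = 0, target_phrase: str = "",
                     sentence_sep: str = "。") -> str:
        if mode == "front_hook":          # front-end hooking: prepend the change content
            return change_content + original
        if mode == "back_hook":           # back-end hooking: append the change content
            return original + change_content
        if mode == "whole_sentence":      # replace one navigation broadcast sentence
            sentences = original.split(sentence_sep)
            sentences[sentence_index] = change_content
            return sentence_sep.join(sentences)
        if mode == "phrase":              # replace a phrase inside the original text
            return original.replace(target_phrase, change_content)
        if mode == "global":              # replace the original text in its entirety
            return change_content
        return original                   # unknown mode: leave the text unchanged

For instance, back-end hooking a greeting onto "turn right at the traffic-light intersection in one hundred meters" simply concatenates the greeting after that sentence, whereas global replacement discards the original sentence altogether.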
In practice, the change configuration information may include change content and change modes for a plurality of changeable scenes. The change content of different changeable scenes may be the same or different, and the change modes of different changeable scenes may be the same or different; the present application is not limited in this respect.
In some embodiments, the change content may be a spoken or literary piece such as a greeting, a blessing, a warning sentence, a humorous line, or a short skit.
There are various ways to implement this step, and the present application is not limited thereto. In one embodiment, if the execution subject of this step is the cloud device in the map navigation broadcasting system, the implementation of this step includes: acquiring the voice packet identifier of the voice packet to be broadcasted sent by the terminal device; sending a configuration information acquisition request including the voice packet identifier to the online service device; and receiving the change configuration information of the voice packet to be broadcasted fed back by the online service device.
The voice packet identifier refers to information capable of distinguishing one voice packet from other voice packets, such as a voice packet name or a voice packet ID.
Optionally, in a specific implementation, a plurality of pieces of change configuration information are stored in advance on the online service device, and different pieces of change configuration information correspond to different voice packet identifiers. When the user wants to navigate with a certain voice packet, the user selects the voice packet he or she wishes to use (hereinafter referred to as the voice packet to be broadcasted) on the terminal device. The terminal device sends the voice packet identifier of the voice packet to be broadcasted to the cloud device; the cloud device sends a configuration information acquisition request including the voice packet identifier to the online service device; and the online service device, in response to the acquisition request, queries the change configuration information corresponding to the voice packet identifier and sends the queried change configuration information of the voice packet to be broadcasted to the cloud device, so that the cloud device receives the change configuration information of the voice packet to be broadcasted fed back by the online service device.
In another embodiment, if the execution subject of this step is the terminal device in the map navigation broadcasting system, the implementation of this step includes: sending a configuration information acquisition request including the voice packet identifier to the online service device; and receiving the change configuration information of the voice packet to be broadcasted fed back by the online service device.
Optionally, in a specific implementation, a plurality of pieces of change configuration information are stored in advance on the online service device, and different pieces of change configuration information correspond to different voice packet identifiers. When the user wants to navigate with a certain voice packet, the user selects the voice packet he or she wishes to use (hereinafter referred to as the voice packet to be broadcasted) on the terminal device. The terminal device determines the voice packet identifier of the selected voice packet and sends a configuration information acquisition request including the voice packet identifier to the online service device; the online service device, in response to the acquisition request, queries the change configuration information corresponding to the voice packet identifier and sends the queried change configuration information of the voice packet to be broadcasted to the terminal device, so that the terminal device receives the change configuration information of the voice packet to be broadcasted fed back by the online service device.
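As a non-limiting sketch of the acquisition flow in either embodiment, the requesting device (the cloud device in the first, the terminal device in the second) could query the online service device roughly as follows. The endpoint URL, query parameter, and response layout are hypothetical and are not specified by the disclosure.

    import json
    import urllib.request

    ONLINE_SERVICE_URL = "http://online-service.example/change-config"   # hypothetical endpoint

    def fetch_change_config(voice_packet_id: str) -> dict:
        # Send a configuration information acquisition request carrying the voice packet
        # identifier, and return the change configuration information that comes back.
        url = f"{ONLINE_SERVICE_URL}?voice_packet_id={voice_packet_id}"
        with urllib.request.urlopen(url) as response:
            return json.loads(response.read())

    # The response might, for example, be organized per changeable scene:
    # {
    #   "service_area":   {"change_content": "Take a break, friends!", "change_mode": "back_hook"},
    #   "congested_road": {"change_content": "audio/greeting.mp3",     "change_mode": "front_hook"}
    # }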
S120, changing the original navigation voice information of the changeable scene in the voice packet to be broadcasted according to the change configuration information, so as to obtain post-change navigation voice information of the changeable scene.
The essence of this step is to change the original navigation voice information with the change content according to the change mode preset for each changeable scene in the change configuration information, so as to obtain the post-change navigation voice information of the changeable scene.
If the change content is text information, the text version of the original navigation voice information is changed according to the change mode to obtain a text version of the post-change navigation voice information, which is then converted into an audio version of the post-change navigation voice information. During this text-to-audio conversion, timbre characteristic information of the character corresponding to the voice packet needs to be added.
If the change content is audio information, the audio version of the original navigation voice information is changed according to the change mode to obtain an audio version of the post-change navigation voice information.
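A minimal sketch of this branch between text and audio change content is given below; apply_text_change, synthesize_speech, and splice_audio are placeholders assumed for this sketch (a text change routine, a text-to-speech step that adds the timbre characteristics of the voice packet's character, and an audio splicing step), not functions defined by the disclosure.

    # Illustrative dispatch producing the audio version of the post-change navigation
    # voice information from either text or audio change content (assumed helpers).
    def change_navigation_voice_info(original_text, original_audio, change_content, mode,
                                     apply_text_change, synthesize_speech, splice_audio):
        if isinstance(change_content, str):
            # text change content: change the text version, then convert it to audio
            changed_text = apply_text_change(original_text, change_content, mode)
            return synthesize_speech(changed_text, timbre="voice_packet_character")
        # audio change content: change the audio version directly
        return splice_audio(original_audio, change_content, mode)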
Different characters speak with different personal language features; for example, the idioms and/or sentence structures they use differ. In the technical solution of this embodiment, change configuration information of the voice packet to be broadcasted is acquired, wherein the voice packet to be broadcasted includes original navigation voice information for each scene, at least one scene is a changeable scene, and the change configuration information includes change content for the original navigation voice information of each changeable scene and a change mode preset for each changeable scene; the original navigation voice information of the changeable scene in the voice packet to be broadcasted is then changed according to the change configuration information to obtain post-change navigation voice information of the changeable scene. In this way, the original navigation voice information in the voice packet to be broadcasted can be personalized, so that the post-change navigation voice information carries the personal language characteristics of the character corresponding to the voice packet. This achieves the purpose of displaying those characteristics, makes the voice broadcast function more interesting, and improves user experience.
On the basis of the above technical solutions, if the execution subject of the navigation voice information processing method is the cloud device, after S120 the method further includes: sending the post-change navigation voice information of the changeable scene to the terminal device. This allows the terminal device to perform navigation voice broadcasting based on the post-change navigation voice information of the changeable scene.
Further, the change configuration information may also include at least one of a change content broadcast interval and a scene fatigue value, and sending the post-change navigation voice information of the changeable scene to the terminal device may include: sending, to the terminal device, the post-change navigation voice information of the changeable scene together with at least one of the change content broadcast interval of the changeable scene and the scene fatigue value of the changeable scene.
The change content broadcast interval is used to limit the minimum number of broadcasts, or the minimum time interval, between two consecutive broadcasts of post-change navigation voice information during navigation. The changeable scenes corresponding to the two consecutively broadcast pieces of post-change navigation voice information may be the same or different; that is, the change content broadcast interval applies to all changeable scenes.
For example, suppose the change content broadcast interval is a minimum broadcast count of 4. During navigation, the scenes to be broadcasted are arranged in broadcast order as scene A1, scene A2, scene A3, scene A4, scene A5, and scene A6, where scene A1, scene A2, and scene A6 are changeable scenes and scene A3, scene A4, and scene A5 are non-changeable scenes. If the post-change navigation voice information corresponding to scene A1 is broadcast at scene A1, then, due to the limitation of the change content broadcast interval, the post-change navigation voice information corresponding to scene A2 is not broadcast at scene A2, but the post-change navigation voice information corresponding to scene A6 is broadcast at scene A6.
If the change content broadcast interval is 10 minutes and post-change navigation voice information corresponding to a changeable scene A is broadcast at time t1, then no post-change navigation voice information is broadcast again during the period (t1, t1+10), even if a changeable scene (whether scene A or not) is encountered again; only after time t1+10 may post-change navigation voice information be broadcast again.
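Both readings of the change content broadcast interval (a minimum broadcast count or a minimum time interval) reduce to a small gate such as the sketch below; the names and the exact boundary condition are assumptions made for this illustration.

    # Illustrative gate: may post-change navigation voice information be broadcast now?
    # `None` in either counter means no post-change broadcast has happened yet.
    def changed_broadcast_allowed(interval, interval_is_count,
                                  broadcasts_since_last_changed, minutes_since_last_changed):
        if interval_is_count:
            # require at least `interval` other broadcasts since the last post-change broadcast
            return (broadcasts_since_last_changed is None
                    or broadcasts_since_last_changed >= interval)
        # require more than `interval` minutes since the last post-change broadcast
        return (minutes_since_last_changed is None
                or minutes_since_last_changed > interval)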
Post-change navigation voice information is often longer than the original navigation voice information. If post-change navigation voice information is broadcast too frequently, the broadcasts take too long and some navigation voice information may not be broadcast at all. In addition, broadcasting the change content too frequently may bore the user.
The scene fatigue value refers to the maximum number of times the post-change navigation voice information of the same changeable scene may be broadcast. For example, if the scene fatigue value set for a changeable scene A is 5, then once the post-change navigation voice information corresponding to scene A has been broadcast 5 times, it will not be broadcast again even if scene A is encountered again during driving.
Setting a scene fatigue value for a changeable scene prevents one piece, or one class, of change content from being played so many times that the user suffers auditory fatigue.
Further, the navigation voice information processing method also includes: receiving the updated scene fatigue value of the changeable scene fed back by the terminal device, and, if the scene fatigue value of the changeable scene meets the fatigue determination threshold, stopping changing the original navigation voice information of the changeable scene according to the change configuration information.
That the scene fatigue value of a changeable scene meets the fatigue determination threshold means that the post-change navigation voice information of that scene has been broadcast up to its upper limit; continuing to broadcast it would cause auditory fatigue, so broadcasting of that post-change navigation voice information must stop in order to relieve the user's auditory fatigue. Since the post-change navigation voice information of the changeable scene is no longer broadcast, there is no need to keep changing the original navigation voice information of that scene, and so the change according to the change configuration information is stopped.
The specific content of the fatigue determination threshold is not limited in this application. In one embodiment, the fatigue determination threshold is optionally set to 0.
On this basis, optionally, the execution subject of the navigation voice information processing method may be the cloud device, and the change configuration information further includes a scene fatigue value. After S120, the post-change navigation voice information of the changeable scene and the scene fatigue value of the changeable scene are sent to the terminal device. Each time the terminal device broadcasts the post-change navigation voice information of the changeable scene, it decrements the scene fatigue value of that scene by one and sends the updated scene fatigue value to the cloud device. After receiving the updated scene fatigue value of the changeable scene fed back by the terminal device, the cloud device determines whether the updated value meets the fatigue determination threshold; if it does, the cloud device stops changing the original navigation voice information of the changeable scene according to the change configuration information.
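A minimal sketch of this bookkeeping, split between the terminal device and the cloud device and assuming the fatigue determination threshold of 0 mentioned above, is shown below; the variable and function names are not taken from the disclosure.

    FATIGUE_THRESHOLD = 0                        # one possible fatigue determination threshold

    scene_fatigue = {"service_area": 5}          # scene fatigue values delivered with the config

    def terminal_after_changed_broadcast(scene: str) -> int:
        # Terminal side: decrement the scene fatigue value after broadcasting the
        # post-change navigation voice information, and report the updated value.
        scene_fatigue[scene] -= 1
        return scene_fatigue[scene]              # sent back to the cloud device

    def cloud_should_stop_changing(updated_value: int) -> bool:
        # Cloud side: stop changing this scene's original navigation voice information
        # once the reported value meets the threshold.
        return updated_value <= FATIGUE_THRESHOLD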
On the basis of the above technical solutions, optionally, if the execution subject of the navigation voice information processing method is the cloud device, the method further includes: receiving, from the terminal device, indication information indicating refusal to use the post-change navigation voice information of the changeable scene. The purpose is to give the user the right to say "no": once the user indicates dislike of the post-change navigation voice information, it is no longer provided.
Optionally, after receiving the indication information, fed back by the terminal device, of refusal to use the post-change navigation voice information of the changeable scene, the changing of the original navigation voice information of that changeable scene according to the change configuration information is stopped, and the terminal device subsequently uses the original navigation voice information for voice broadcasting.
Fig. 5 is a flowchart of a navigation voice broadcasting method provided in an embodiment of the present disclosure. The navigation voice broadcasting method may be performed by a navigation voice broadcasting apparatus, which may be part of an electronic map application, may be implemented in software and/or hardware, and may be integrated on any electronic device having computing capability, such as a terminal device. The terminal device may specifically be a vehicle-mounted device or a mobile terminal. Referring to fig. 5, the navigation voice broadcasting method includes:
S210, receiving post-change navigation voice information of a changeable scene, wherein the post-change navigation voice information is obtained by changing, according to change configuration information, the original navigation voice information of the changeable scene in the voice packet to be broadcasted; the voice packet to be broadcasted includes original navigation voice information for each scene, and at least one scene is a changeable scene; the change configuration information includes change content for the original navigation voice information of each changeable scene and a change mode preset for each changeable scene.
S220, performing navigation voice broadcasting according to the voice packet to be broadcasted and the post-change navigation voice information of the changeable scene.
In actual navigation voice broadcasting there are two situations. In the first, the scene to be broadcasted is not a changeable scene; since no post-change navigation voice information has been created for it, the scene corresponds only to original navigation voice information, which must be used for the voice broadcast. In the second, the scene to be broadcasted is a changeable scene; it corresponds to both original navigation voice information and post-change navigation voice information, and one of the two must be selected for the voice broadcast.
There are various ways to implement this step, and the present application is not limited thereto. Illustratively, the implementation of this step includes: sorting the scenes for playback according to scene priority information and/or scene occurrence positions, and determining the broadcast interval of each scene in the broadcast voice pool, where the scenes in the broadcast voice pool include changeable scenes; and, for a changeable scene, selecting either the original navigation voice information of the changeable scene in the voice packet to be broadcasted or the post-change navigation voice information for voice broadcasting, according to the broadcast intervals of the scenes in the broadcast voice pool.
Scene priority information describes which scene's navigation voice information is played first when two scenes correspond to the same place. Optionally, the navigation voice information corresponding to the scene with the higher priority is played first. For example, if the scene priority of a congested road is lower than that of a gateway and a certain location corresponds to both scenes, the navigation voice information corresponding to the gateway is broadcast first, and then the navigation voice information corresponding to the congested road.
The scene occurrence position refers to the geographic position, on the navigation route, of the place corresponding to the scene. Ordering playback by scene occurrence position means arranging the navigation voice information of the scenes in the order in which the vehicle reaches the corresponding places along the navigation route. For example, if a navigation route passes through intersection C and then through intersection B, the navigation voice information corresponding to intersection C is arranged first, followed by the navigation voice information corresponding to intersection B.
The broadcast voice pool holds the collection of navigation voice information that needs to be broadcast during navigation. In practice, a voice packet includes navigation voice information for many scenes, but an actual navigation session involves only some of them; only the navigation voice information of the scenes involved in the current navigation needs to be added to the broadcast voice pool for arrangement. The scenes in the broadcast voice pool include changeable scenes and non-changeable scenes. During voice broadcasting, the navigation voice information is broadcast in the order in which it is arranged in the broadcast voice pool.
The broadcast interval of a scene in the broadcast voice pool refers to the broadcast duration reserved for that scene.
Selecting, for a changeable scene and according to the broadcast intervals of the scenes in the broadcast voice pool, either the original navigation voice information of the changeable scene in the voice packet to be broadcasted or the post-change navigation voice information means choosing one of the two based on the broadcast duration reserved for the changeable scene. In making this choice, the completeness of the broadcast is taken into account.
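One possible way to order the broadcast voice pool and to derive a broadcast interval for each scene is sketched below. The dictionary keys, the use of distance along the route, and the fixed-speed travel-time estimate are assumptions made for this illustration and are not prescribed by the disclosure.

    # Illustrative ordering of the broadcast voice pool by scene occurrence position and
    # scene priority, then reserving a broadcast duration (interval) for each scene.
    def order_broadcast_pool(pool, speed_mps=12.0):
        # pool: list of dicts with assumed keys "scene", "priority", "position_m"
        ordered = sorted(pool, key=lambda s: (s["position_m"], -s["priority"]))
        intervals = {}
        for current, nxt in zip(ordered, ordered[1:]):
            # reserve roughly the travel time to the next scene as this scene's interval
            intervals[current["scene"]] = (nxt["position_m"] - current["position_m"]) / speed_mps
        if ordered:
            intervals[ordered[-1]["scene"]] = float("inf")   # last scene: unconstrained here
        return ordered, intervals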
In one embodiment, selecting, according to the broadcast intervals of the scenes in the broadcast voice pool, either the original navigation voice information of the changeable scene in the voice packet to be broadcasted or the post-change navigation voice information for voice broadcasting includes: if the broadcast duration required by the post-change navigation voice information of the changeable scene is longer than the broadcast interval determined for the changeable scene in the broadcast voice pool, using the original navigation voice information of the changeable scene in the voice packet to be broadcasted for navigation voice broadcasting.
That the broadcast duration required by the post-change navigation voice information of the changeable scene is longer than the broadcast interval determined for that scene means that the required duration exceeds the broadcast time reserved for the scene; if the post-change navigation voice information were selected, it could not be broadcast completely.
Further, if the broadcast duration required by the post-change navigation voice information of the changeable scene is less than or equal to the broadcast interval determined for the changeable scene in the broadcast voice pool, the post-change navigation voice information of the changeable scene is used for navigation voice broadcasting.
In other words, when it can be broadcast completely, the post-change navigation voice information is preferred; when it cannot be broadcast completely, the original navigation voice information is selected. This ensures the completeness of the navigation voice broadcast.
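The selection rule just described reduces to a single comparison; the sketch below uses assumed names and treats the broadcast interval as a duration in seconds.

    # Prefer the post-change navigation voice information when it fits within the scene's
    # broadcast interval; otherwise fall back to the original navigation voice information.
    def select_for_broadcast(original, changed, changed_duration_s, broadcast_interval_s):
        if changed is not None and changed_duration_s <= broadcast_interval_s:
            return changed        # can be broadcast completely
        return original           # keeps the navigation voice broadcast complete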
In the technical solution of this embodiment, post-change navigation voice information of a changeable scene is received, where the post-change navigation voice information is obtained by changing, according to change configuration information, the original navigation voice information of the changeable scene in the voice packet to be broadcasted; the voice packet to be broadcasted includes original navigation voice information for each scene, and at least one scene is a changeable scene; the change configuration information includes change content for the original navigation voice information of each changeable scene and a change mode preset for each changeable scene. By performing navigation voice broadcasting according to the voice packet to be broadcasted and the post-change navigation voice information of the changeable scene, the broadcast navigation voice information can carry the personal language characteristics of the character corresponding to the voice packet, thereby displaying those characteristics, making the voice broadcast function more interesting, and improving user experience.
On the basis of the above technical solutions, optionally, the navigation voice broadcasting method includes: receiving at least one of the change content broadcast interval of a changeable scene and the scene fatigue value of the changeable scene. S220 then includes: for the changeable scene, selecting either the original navigation voice information of the changeable scene in the voice packet to be broadcasted or the post-change navigation voice information for navigation voice broadcasting, according to at least one of the change content broadcast interval of the changeable scene and the scene fatigue value of the changeable scene.
The following description takes as an example selecting, according to the change content broadcast interval of the changeable scene, either the original navigation voice information of the changeable scene in the voice packet to be broadcasted or the post-change navigation voice information for navigation voice broadcasting.
The change content broadcast interval is used to limit the minimum number of broadcasts, or the minimum time interval, between two consecutive broadcasts of post-change navigation voice information during navigation. The changeable scenes corresponding to the two consecutively broadcast pieces of post-change navigation voice information may be the same or different; that is, the change content broadcast interval applies to all changeable scenes.
For example, suppose the change content broadcast interval is a minimum broadcast count of 4. During navigation, the scenes to be broadcasted are arranged in broadcast order as scene A1, scene A2, scene A3, scene A4, scene A5, and scene A6, where scene A1, scene A2, and scene A6 are changeable scenes and scene A3, scene A4, and scene A5 are non-changeable scenes. If the post-change navigation voice information corresponding to scene A1 is broadcast at scene A1, then, due to the limitation of the change content broadcast interval, the post-change navigation voice information corresponding to scene A2 is not broadcast at scene A2; instead, the original navigation voice information corresponding to scene A2 is broadcast. Since scene A3, scene A4, and scene A5 are non-changeable scenes, they have no post-change navigation voice information, so the original navigation voice information corresponding to scene A3, scene A4, and scene A5 is broadcast at those scenes. Because the interval between scene A6 and scene A1 is greater than 4 (the change content broadcast interval), the post-change navigation voice information corresponding to scene A6 is broadcast at scene A6.
If the change content broadcast interval is 10 minutes and post-change navigation voice information corresponding to a changeable scene A is broadcast at time t1, then no post-change navigation voice information is broadcast again during the period (t1, t1+10), even if a changeable scene (whether scene A or not) is encountered again; only after time t1+10 may post-change navigation voice information be broadcast again.
Post-change navigation voice information is often longer than the original navigation voice information. If post-change navigation voice information is broadcast too frequently, the broadcasts take too long and some navigation voice information may not be broadcast at all. In addition, broadcasting the change content too frequently may bore the user.
Further, in the case where, for the changeable scene, either the original navigation voice information of the changeable scene in the voice packet to be broadcasted or the post-change navigation voice information is selected according to the scene fatigue value of the changeable scene, each time the post-change navigation voice information of the changeable scene is selected for navigation voice broadcasting, the scene fatigue value of that changeable scene is decremented by one, and the updated scene fatigue value of the changeable scene is sent to the cloud device.
For example, assume that the scene fatigue value of a changeable scene D in the change configuration information is 10, and that the fatigue determination threshold is a scene fatigue value of 0. The cloud device sends the scene fatigue value (namely 10) to the terminal device. Each time the terminal device selects the post-change navigation voice information of the changeable scene D for navigation voice broadcasting, it decrements the scene fatigue value of scene D by one.
Specifically, after the terminal device selects the post-change navigation voice information of the changeable scene D for navigation voice broadcasting for the first time, it updates the scene fatigue value of scene D to 9 and sends this value (namely 9) to the cloud device; after the second such broadcast, it updates the scene fatigue value of scene D to 8 and sends this value (namely 8) to the cloud device; and so on, until after the tenth such broadcast it updates the scene fatigue value of scene D to 0 and sends this value (namely 0) to the cloud device.
Each time the cloud device receives the updated scene fatigue value of the changeable scene fed back by the terminal device, it determines whether the updated value is 0; if the updated scene fatigue value of the changeable scene is 0, it stops changing the original navigation voice information of the changeable scene according to the change configuration information.
On the basis of the above technical solutions, optionally, the method further includes: acquiring indication information indicating that the user refuses to use the post-change navigation voice information of the changeable scene, and feeding this indication information back to the cloud device. The purpose is to give the user the right to say "no": once the user indicates dislike of the post-change navigation voice information, it is no longer provided.
Based on the same inventive concept, the embodiments of the present disclosure also provide a navigation voice information processing apparatus, which can perform the steps of any navigation voice information processing method provided by the embodiments of the present disclosure and has the functional modules and beneficial effects corresponding to the executed method. The navigation voice information processing apparatus may be implemented in software and/or hardware and may be integrated on any terminal device or cloud device having computing capability.
Fig. 6 is a schematic structural diagram of a navigation voice information processing apparatus according to an embodiment of the present disclosure. Referring to fig. 6, the navigation voice information processing apparatus includes:
an acquisition module 310, configured to acquire change configuration information of a voice packet to be broadcasted, where the voice packet to be broadcasted includes original navigation voice information for each scene, at least one scene is a changeable scene, and the change configuration information includes change content for the original navigation voice information of each changeable scene and a change mode preset for each changeable scene;
and a changing module 320, configured to change the original navigation voice information of the changeable scene in the voice packet to be broadcasted according to the change configuration information, so as to obtain post-change navigation voice information of the changeable scene.
Further, the preset change mode includes at least one of front-end hooking, back-end hooking, whole-sentence replacement, phrase replacement, and global replacement.
Further, the acquisition module 310 is configured to:
acquire the voice packet identifier of the voice packet to be broadcasted sent by the terminal device;
send a configuration information acquisition request including the voice packet identifier to the online service device;
and receive the change configuration information of the voice packet to be broadcasted fed back by the online service device.
Further, the apparatus also includes a first sending module, configured to:
send the post-change navigation voice information of the changeable scene to the terminal device.
Further, the change configuration information also includes at least one of a change content broadcast interval and a scene fatigue value, and the first sending module is configured to:
send, to the terminal device, the post-change navigation voice information of the changeable scene together with at least one of the change content broadcast interval of the changeable scene and the scene fatigue value of the changeable scene.
Further, the apparatus further comprises a first receiving module, where the first receiving module is configured to:
receive the updated scene fatigue value of the changeable scene fed back by the terminal device, and stop changing the original navigation voice information of the changeable scene according to the change configuration information if the scene fatigue value of the changeable scene meets a fatigue judgment threshold.
Further, the first receiving module is further configured to:
receive indication information, fed back by the terminal device, indicating refusal to use the post-change navigation voice information of the changeable scene.
Based on the same inventive concept, the embodiments of the present disclosure further provide a navigation voice broadcasting apparatus, which can execute the steps of any navigation voice broadcasting method provided by the embodiments of the present disclosure and has the corresponding functional modules and beneficial effects of the executed method. The apparatus may be implemented in software and/or hardware and may be integrated on any terminal device having computing capability.
Fig. 7 is a schematic structural diagram of a navigation voice broadcasting device according to an embodiment of the present disclosure. Referring to fig. 7, the navigation voice broadcasting apparatus includes:
the receiving module 410, configured to receive post-change navigation voice information of a changeable scene, where the post-change navigation voice information is obtained by changing the original navigation voice information of the changeable scene in a voice packet to be broadcasted according to change configuration information; the voice packet to be broadcasted includes original navigation voice information of each scene, and at least one scene is a changeable scene; and the change configuration information includes change contents of the original navigation voice information of each changeable scene and a preset change manner for each changeable scene;
and the broadcasting module 420, configured to perform navigation voice broadcasting according to the voice packet to be broadcasted and the post-change navigation voice information of the changeable scene.
Further, the broadcasting module 420 is configured to:
perform play ordering according to scene priority information and/or scene occurrence positions, and determine the broadcast interval of each scene in a broadcast voice pool, where the scenes in the broadcast voice pool include the changeable scene;
and for the changeable scene, select, according to the broadcast interval of each scene in the broadcast voice pool, either the original navigation voice information of the changeable scene in the voice packet to be broadcasted or the post-change navigation voice information of the changeable scene for navigation voice broadcasting (see the ordering sketch below).
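An illustrative ordering of pending scenes in a broadcast voice pool; the class fields, the priority convention, and the sort key are assumptions made for this sketch only.

```python
# Hypothetical play ordering: higher scene priority first, then by the
# position along the route at which the scene occurs.
from dataclasses import dataclass
from typing import List

@dataclass
class PendingScene:
    scene_id: str
    priority: int                 # larger value = more urgent
    occurrence_position_m: float  # distance along the route where the scene occurs

def order_broadcast_pool(pool: List[PendingScene]) -> List[PendingScene]:
    return sorted(pool, key=lambda s: (-s.priority, s.occurrence_position_m))
```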
Further, the broadcasting module 420 is configured to:
if the broadcast time required by the post-change navigation voice information of the changeable scene is longer than the broadcast interval determined for the changeable scene in the broadcast voice pool, perform navigation voice broadcasting using the original navigation voice information of the changeable scene in the voice packet to be broadcasted (see the selection sketch below).
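A rough sketch of this selection rule; the helper name, the length-based duration estimate, and the speech-rate parameter are assumptions, not the disclosed method of measuring broadcast time.

```python
# Illustrative selection between original and post-change navigation voice text.
def select_broadcast_text(original: str, changed: str,
                          interval_s: float, speech_rate_cps: float = 5.0) -> str:
    # Estimate the broadcast time of the changed text from its length and an
    # assumed speech rate (characters per second).
    required_s = len(changed) / speech_rate_cps
    # If the changed text cannot fit in the interval determined for this scene
    # in the broadcast voice pool, fall back to the original text.
    return original if required_s > interval_s else changed
```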
Further, the receiving module 410 is further configured to receive at least one of the changed-content broadcast interval of the changeable scene and the scene fatigue value of the changeable scene;
a broadcasting module 420, configured to:
for the changeable scene, select, according to at least one of the changed-content broadcast interval of the changeable scene and the scene fatigue value of the changeable scene, either the original navigation voice information of the changeable scene in the voice packet to be broadcasted or the post-change navigation voice information of the changeable scene for navigation voice broadcasting.
Further, the broadcasting module 420 is configured to:
each time the post-change navigation voice information of the changeable scene is selected for navigation voice broadcasting, decrement the scene fatigue value of the changeable scene by one and send the updated scene fatigue value of the changeable scene to the cloud device (see the sketch below).
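A terminal-side sketch of the fatigue bookkeeping just described; the reporting callback and the field names are assumptions made for this illustration.

```python
# Hypothetical terminal-side bookkeeping: each time the changed text is actually
# broadcast, decrement the scene fatigue value and report it to the cloud device.
def after_changed_broadcast(fatigue_state: dict, scene_id: str, report_to_cloud) -> None:
    fatigue_state[scene_id] = max(0, fatigue_state.get(scene_id, 0) - 1)
    report_to_cloud({"scene_id": scene_id,
                     "scene_fatigue_value": fatigue_state[scene_id]})
```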
Further, the apparatus also includes a transfer module for:
acquire indication information indicating that the user refuses to use the post-change navigation voice information of the changeable scene;
and feed back, to the cloud device, the indication information indicating refusal to use the post-change navigation voice information of the changeable scene (see the sketch below).
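A small sketch of the refusal feedback path, with hypothetical names: once the user indicates refusal, the terminal forwards the indication so that changed text is no longer delivered for that scene.

```python
# Hypothetical refusal feedback: forward the user's "don't use this" indication.
def on_user_refusal(scene_id: str, report_to_cloud) -> None:
    report_to_cloud({"scene_id": scene_id, "refuse_post_change": True})
```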
Fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure, which exemplarily illustrates an electronic device implementing the navigation voice information processing method or the navigation voice broadcasting method of the embodiments of the present disclosure and should not be construed as specifically limiting the embodiments of the present disclosure.
As shown in fig. 8, the electronic device 700 may include a processor (e.g., a central processing unit, a graphics processor, etc.) 701, which may perform various suitable actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the electronic device 700. The processor 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
In general, the following devices may be connected to the I/O interface 705: an input device 706 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; an output device 707 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; a storage device 708 including, for example, a magnetic tape, a hard disk, and the like; and a communication device 709. The communication device 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While the electronic device 700 is shown having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 709, or installed from the storage device 708, or installed from the ROM 702. When the computer program is executed by the processor 701, any of the functions defined in the navigation voice information processing method or the navigation voice broadcasting method provided by the embodiments of the present disclosure may be executed.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
acquiring change configuration information of a voice packet to be broadcasted, where the voice packet to be broadcasted includes original navigation voice information of each scene, at least one scene is a changeable scene, and the change configuration information includes change contents of the original navigation voice information of each changeable scene and a preset change manner for each changeable scene;
and changing the original navigation voice information of the changeable scene in the voice packet to be broadcasted according to the change configuration information, so as to obtain post-change navigation voice information of the changeable scene.
or, alternatively:
receiving post-change navigation voice information of a changeable scene, where the post-change navigation voice information is obtained by changing the original navigation voice information of the changeable scene in a voice packet to be broadcasted according to change configuration information; the voice packet to be broadcasted includes original navigation voice information of each scene, and at least one scene is a changeable scene; and the change configuration information includes change contents of the original navigation voice information of each changeable scene and a preset change manner for each changeable scene;
and performing navigation voice broadcasting according to the voice packet to be broadcasted and the post-change navigation voice information of the changeable scene.
In an embodiment of the present disclosure, computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the computer, partly on the computer, as a stand-alone software package, partly on the computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a computer-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer-readable storage medium would include one or more wire-based electrical connections, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by substituting the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (17)

1. A navigation voice information processing method, comprising:
acquiring change configuration information of a voice packet to be broadcasted, wherein the voice packet to be broadcasted comprises original navigation voice information of each scene, at least one scene is a changeable scene, and the change configuration information comprises change contents of the original navigation voice information of each changeable scene and a preset change manner for each changeable scene;
and changing the original navigation voice information of the changeable scene in the voice packet to be broadcasted according to the change configuration information, so as to obtain post-change navigation voice information of the changeable scene.
2. The method of claim 1, wherein the preset change manner comprises at least one of front-end hanging, rear-end hanging, whole-sentence replacement, phrase replacement, and global replacement.
3. The method of claim 1, wherein the obtaining the change configuration information of the voice packet to be broadcasted comprises:
acquiring a voice packet identifier of the voice packet to be broadcasted, which is sent by a terminal device;
sending a configuration information acquisition request comprising the voice packet identifier to online service equipment;
and receiving the change configuration information of the voice packet to be broadcasted fed back by the online service device.
4. The method of claim 1, further comprising:
sending the post-change navigation voice information of the changeable scene to a terminal device.
5. The method of claim 4, wherein the change configuration information further includes at least one of a changed-content broadcast interval and a scene fatigue value, and the sending the post-change navigation voice information of the changeable scene to the terminal device comprises:
sending, to the terminal device, the post-change navigation voice information of the changeable scene together with at least one of the changed-content broadcast interval of the changeable scene and the scene fatigue value of the changeable scene.
6. The method of claim 4, further comprising:
receiving the updated scene fatigue value of the changeable scene fed back by the terminal device, and stopping changing the original navigation voice information of the changeable scene according to the change configuration information if the scene fatigue value of the changeable scene meets a fatigue judgment threshold.
7. The method of claim 4, further comprising:
receiving indication information, fed back by the terminal device, indicating refusal to use the post-change navigation voice information of the changeable scene.
8. A navigation voice broadcasting method, comprising:
receiving post-change navigation voice information of a changeable scene, wherein the post-change navigation voice information is obtained by changing the original navigation voice information of the changeable scene in a voice packet to be broadcasted according to change configuration information; the voice packet to be broadcasted comprises original navigation voice information of each scene, and at least one scene is a changeable scene; and the change configuration information comprises change contents of the original navigation voice information of each changeable scene and a preset change manner for each changeable scene;
and performing navigation voice broadcasting according to the voice packet to be broadcasted and the post-change navigation voice information of the changeable scene.
9. The method of claim 8, wherein the performing navigation voice broadcasting according to the voice packet to be broadcasted and the post-change navigation voice information of the changeable scene comprises:
performing play ordering according to scene priority information and/or scene occurrence positions, and determining a broadcast interval of each scene in a broadcast voice pool, wherein the scenes in the broadcast voice pool comprise the changeable scene;
and for the changeable scene, selecting, according to the broadcast interval of each scene in the broadcast voice pool, either the original navigation voice information of the changeable scene in the voice packet to be broadcasted or the post-change navigation voice information of the changeable scene for navigation voice broadcasting.
10. The method of claim 9, wherein the selecting either the original navigation voice information of the changeable scene in the voice packet to be broadcasted or the post-change navigation voice information according to the broadcast interval of each scene in the broadcast voice pool comprises:
if the broadcast time required by the post-change navigation voice information of the changeable scene is longer than the broadcast interval determined for the changeable scene in the broadcast voice pool, performing navigation voice broadcasting using the original navigation voice information of the changeable scene in the voice packet to be broadcasted.
11. The method of claim 8, further comprising:
receiving at least one of a changed-content broadcast interval of the changeable scene and a scene fatigue value of the changeable scene;
wherein the performing navigation voice broadcasting according to the voice packet to be broadcasted and the post-change navigation voice information of the changeable scene comprises:
for the changeable scene, selecting, according to at least one of the changed-content broadcast interval of the changeable scene and the scene fatigue value of the changeable scene, either the original navigation voice information of the changeable scene in the voice packet to be broadcasted or the post-change navigation voice information of the changeable scene for navigation voice broadcasting.
12. The method of claim 11, wherein each time the post-change navigation voice information of the changeable scene is selected for navigation voice broadcasting, the scene fatigue value of the changeable scene is decremented by one, and the updated scene fatigue value of the changeable scene is sent to the cloud device.
13. The method of claim 8, further comprising:
acquiring indication information indicating that a user refuses to use the post-change navigation voice information of the changeable scene;
and feeding back, to the cloud device, the indication information indicating refusal to use the post-change navigation voice information of the changeable scene.
14. A navigation voice information processing apparatus, comprising:
an obtaining module, configured to obtain change configuration information of a voice packet to be broadcasted, wherein the voice packet to be broadcasted comprises original navigation voice information of each scene, at least one scene is a changeable scene, and the change configuration information comprises change contents of the original navigation voice information of each changeable scene and a preset change manner for each changeable scene;
and a changing module, configured to change the original navigation voice information of the changeable scene in the voice packet to be broadcasted according to the change configuration information, so as to obtain post-change navigation voice information of the changeable scene.
15. A navigation voice broadcasting apparatus, comprising:
a receiving module, configured to receive post-change navigation voice information of a changeable scene, wherein the post-change navigation voice information is obtained by changing the original navigation voice information of the changeable scene in a voice packet to be broadcasted according to change configuration information; the voice packet to be broadcasted comprises original navigation voice information of each scene, and at least one scene is a changeable scene; and the change configuration information comprises change contents of the original navigation voice information of each changeable scene and a preset change manner for each changeable scene;
and a broadcasting module, configured to perform navigation voice broadcasting according to the voice packet to be broadcasted and the post-change navigation voice information of the changeable scene.
16. An electronic device, comprising: a memory and a processor, wherein
the memory is configured to store instructions executable by the processor; and
the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the navigation voice information processing method according to any one of claims 1 to 7 or the navigation voice broadcasting method according to any one of claims 8 to 13.
17. A computer program product for executing the navigation voice information processing method according to any one of claims 1 to 7 or the navigation voice broadcasting method according to any one of claims 8 to 13.
CN202310121684.XA 2023-01-19 2023-01-19 Navigation voice information processing method, device, equipment, medium and product Pending CN116164772A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310121684.XA CN116164772A (en) 2023-01-19 2023-01-19 Navigation voice information processing method, device, equipment, medium and product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310121684.XA CN116164772A (en) 2023-01-19 2023-01-19 Navigation voice information processing method, device, equipment, medium and product

Publications (1)

Publication Number Publication Date
CN116164772A true CN116164772A (en) 2023-05-26

Family

ID=86411040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310121684.XA Pending CN116164772A (en) 2023-01-19 2023-01-19 Navigation voice information processing method, device, equipment, medium and product

Country Status (1)

Country Link
CN (1) CN116164772A (en)

Similar Documents

Publication Publication Date Title
US20210286587A1 (en) Audio Announcement Prioritization System
US9836486B2 (en) Point of interest database maintenance system
US8352539B2 (en) Content distributing system and content receiving and reproducing device
CN106643774B (en) Navigation route generation method and terminal
CN111735472A (en) Navigation audio playing method, device, equipment and computer storage medium
CN106128138B (en) Method and system for providing information based on driving conditions
CN113419697A (en) Screen projection method, screen projection device, electronic equipment, vehicle machine and screen projection system
CN108779987A (en) Communication terminal, server unit, route search system and computer program
CN109817214B (en) Interaction method and device applied to vehicle
US10394869B2 (en) Dynamically linking information in a network
CN111191850A (en) Data processing method, device and equipment
EP3082341A2 (en) Content recommendation device, method, and system
CN116164772A (en) Navigation voice information processing method, device, equipment, medium and product
CN112561583A (en) Interactive vehicle-mounted display method and device based on cloud big data service
US10169986B2 (en) Integration of personalized traffic information
CN111405477A (en) Route sharing method and device and related equipment
CN114822062A (en) Traffic station prompting method and device and storage medium
CN107480842A (en) One kind uses car Order splitting processing method and system
JPH11101652A (en) Electronic mail data receiver, electronic mail host apparatus, medium storing their program, and electronic mail system
CN111641693A (en) Session data processing method and device and electronic equipment
CN112235333B (en) Function package management method, device, equipment and storage medium
CN117290606A (en) Recommendation information display method, device, system, equipment and storage medium
JP6221534B2 (en) Information terminal, information providing system, destination setting method, and computer program
JP6221533B2 (en) Information terminal, information providing system, destination setting method, and computer program
JP7383026B2 (en) Navigation audio reproduction method, device, equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination